00:00:00.000 Started by upstream project "autotest-per-patch" build number 132818 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.064 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.064 The recommended git tool is: git 00:00:00.064 using credential 00000000-0000-0000-0000-000000000002 00:00:00.066 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.117 Fetching changes from the remote Git repository 00:00:00.119 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.187 Using shallow fetch with depth 1 00:00:00.187 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.187 > git --version # timeout=10 00:00:00.249 > git --version # 'git version 2.39.2' 00:00:00.249 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.291 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.291 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.159 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.171 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.183 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.183 > git config core.sparsecheckout # timeout=10 00:00:07.195 > git read-tree -mu HEAD # timeout=10 00:00:07.214 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.241 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.241 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.361 [Pipeline] Start of Pipeline 00:00:07.373 [Pipeline] library 00:00:07.375 Loading library shm_lib@master 00:00:07.375 Library shm_lib@master is cached. Copying from home. 00:00:07.389 [Pipeline] node 00:00:07.404 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.405 [Pipeline] { 00:00:07.414 [Pipeline] catchError 00:00:07.415 [Pipeline] { 00:00:07.425 [Pipeline] wrap 00:00:07.433 [Pipeline] { 00:00:07.440 [Pipeline] stage 00:00:07.441 [Pipeline] { (Prologue) 00:00:07.667 [Pipeline] sh 00:00:07.951 + logger -p user.info -t JENKINS-CI 00:00:07.966 [Pipeline] echo 00:00:07.967 Node: GP11 00:00:07.973 [Pipeline] sh 00:00:08.273 [Pipeline] setCustomBuildProperty 00:00:08.284 [Pipeline] echo 00:00:08.285 Cleanup processes 00:00:08.290 [Pipeline] sh 00:00:08.576 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.576 2191406 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.587 [Pipeline] sh 00:00:08.869 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.869 ++ grep -v 'sudo pgrep' 00:00:08.869 ++ awk '{print $1}' 00:00:08.869 + sudo kill -9 00:00:08.869 + true 00:00:08.881 [Pipeline] cleanWs 00:00:08.888 [WS-CLEANUP] Deleting project workspace... 00:00:08.888 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.894 [WS-CLEANUP] done 00:00:08.897 [Pipeline] setCustomBuildProperty 00:00:08.907 [Pipeline] sh 00:00:09.187 + sudo git config --global --replace-all safe.directory '*' 00:00:09.274 [Pipeline] httpRequest 00:00:09.834 [Pipeline] echo 00:00:09.836 Sorcerer 10.211.164.112 is alive 00:00:09.845 [Pipeline] retry 00:00:09.847 [Pipeline] { 00:00:09.859 [Pipeline] httpRequest 00:00:09.865 HttpMethod: GET 00:00:09.865 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.866 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.878 Response Code: HTTP/1.1 200 OK 00:00:09.878 Success: Status code 200 is in the accepted range: 200,404 00:00:09.878 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:21.543 [Pipeline] } 00:00:21.559 [Pipeline] // retry 00:00:21.566 [Pipeline] sh 00:00:21.852 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:21.869 [Pipeline] httpRequest 00:00:22.280 [Pipeline] echo 00:00:22.282 Sorcerer 10.211.164.112 is alive 00:00:22.291 [Pipeline] retry 00:00:22.293 [Pipeline] { 00:00:22.307 [Pipeline] httpRequest 00:00:22.312 HttpMethod: GET 00:00:22.312 URL: http://10.211.164.112/packages/spdk_86d35c37afb5a441206b26f894d7511170c8c587.tar.gz 00:00:22.314 Sending request to url: http://10.211.164.112/packages/spdk_86d35c37afb5a441206b26f894d7511170c8c587.tar.gz 00:00:22.334 Response Code: HTTP/1.1 200 OK 00:00:22.334 Success: Status code 200 is in the accepted range: 200,404 00:00:22.334 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_86d35c37afb5a441206b26f894d7511170c8c587.tar.gz 00:01:25.850 [Pipeline] } 00:01:25.865 [Pipeline] // retry 00:01:25.871 [Pipeline] sh 00:01:26.159 + tar --no-same-owner -xf spdk_86d35c37afb5a441206b26f894d7511170c8c587.tar.gz 00:01:28.714 [Pipeline] sh 00:01:29.000 + git -C spdk log --oneline -n5 00:01:29.000 86d35c37a bdev: simplify bdev_reset_freeze_channel 00:01:29.000 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails 00:01:29.000 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions 00:01:29.000 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove 00:01:29.000 0ea9ac02f accel/mlx5: Create pool of UMRs 00:01:29.097 [Pipeline] } 00:01:29.111 [Pipeline] // stage 00:01:29.120 [Pipeline] stage 00:01:29.122 [Pipeline] { (Prepare) 00:01:29.137 [Pipeline] writeFile 00:01:29.151 [Pipeline] sh 00:01:29.436 + logger -p user.info -t JENKINS-CI 00:01:29.449 [Pipeline] sh 00:01:29.734 + logger -p user.info -t JENKINS-CI 00:01:29.746 [Pipeline] sh 00:01:30.032 + cat autorun-spdk.conf 00:01:30.032 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.032 SPDK_TEST_NVMF=1 00:01:30.032 SPDK_TEST_NVME_CLI=1 00:01:30.032 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.032 SPDK_TEST_NVMF_NICS=e810 00:01:30.032 SPDK_TEST_VFIOUSER=1 00:01:30.032 SPDK_RUN_UBSAN=1 00:01:30.032 NET_TYPE=phy 00:01:30.040 RUN_NIGHTLY=0 00:01:30.044 [Pipeline] readFile 00:01:30.068 [Pipeline] withEnv 00:01:30.070 [Pipeline] { 00:01:30.082 [Pipeline] sh 00:01:30.401 + set -ex 00:01:30.402 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:30.402 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:30.402 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.402 ++ SPDK_TEST_NVMF=1 00:01:30.402 ++ SPDK_TEST_NVME_CLI=1 00:01:30.402 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.402 ++ 
SPDK_TEST_NVMF_NICS=e810 00:01:30.402 ++ SPDK_TEST_VFIOUSER=1 00:01:30.402 ++ SPDK_RUN_UBSAN=1 00:01:30.402 ++ NET_TYPE=phy 00:01:30.402 ++ RUN_NIGHTLY=0 00:01:30.402 + case $SPDK_TEST_NVMF_NICS in 00:01:30.402 + DRIVERS=ice 00:01:30.402 + [[ tcp == \r\d\m\a ]] 00:01:30.402 + [[ -n ice ]] 00:01:30.402 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:30.402 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:30.402 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:30.402 rmmod: ERROR: Module irdma is not currently loaded 00:01:30.402 rmmod: ERROR: Module i40iw is not currently loaded 00:01:30.402 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:30.402 + true 00:01:30.402 + for D in $DRIVERS 00:01:30.402 + sudo modprobe ice 00:01:30.402 + exit 0 00:01:30.439 [Pipeline] } 00:01:30.453 [Pipeline] // withEnv 00:01:30.458 [Pipeline] } 00:01:30.471 [Pipeline] // stage 00:01:30.480 [Pipeline] catchError 00:01:30.482 [Pipeline] { 00:01:30.495 [Pipeline] timeout 00:01:30.495 Timeout set to expire in 1 hr 0 min 00:01:30.496 [Pipeline] { 00:01:30.510 [Pipeline] stage 00:01:30.512 [Pipeline] { (Tests) 00:01:30.525 [Pipeline] sh 00:01:30.811 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.811 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.811 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.811 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:30.811 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:30.811 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:30.811 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:30.811 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:30.811 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:30.811 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:30.811 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:30.811 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.811 + source /etc/os-release 00:01:30.811 ++ NAME='Fedora Linux' 00:01:30.811 ++ VERSION='39 (Cloud Edition)' 00:01:30.811 ++ ID=fedora 00:01:30.811 ++ VERSION_ID=39 00:01:30.811 ++ VERSION_CODENAME= 00:01:30.811 ++ PLATFORM_ID=platform:f39 00:01:30.811 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:30.811 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:30.811 ++ LOGO=fedora-logo-icon 00:01:30.811 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:30.811 ++ HOME_URL=https://fedoraproject.org/ 00:01:30.811 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:30.811 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:30.811 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:30.811 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:30.811 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:30.811 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:30.811 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:30.811 ++ SUPPORT_END=2024-11-12 00:01:30.811 ++ VARIANT='Cloud Edition' 00:01:30.811 ++ VARIANT_ID=cloud 00:01:30.811 + uname -a 00:01:30.811 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:30.811 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:31.749 Hugepages 00:01:31.749 node hugesize free / total 00:01:31.749 node0 1048576kB 0 / 0 00:01:31.749 node0 2048kB 0 / 0 00:01:31.749 node1 1048576kB 0 / 0 00:01:31.749 node1 2048kB 0 / 0 00:01:31.749 
00:01:31.749 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:31.749 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:31.749 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:31.749 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:31.749 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:31.749 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:31.749 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:31.749 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:31.749 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:31.749 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:31.749 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:31.749 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:31.749 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:31.749 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:31.749 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:31.749 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:31.749 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:31.749 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:32.009 + rm -f /tmp/spdk-ld-path 00:01:32.009 + source autorun-spdk.conf 00:01:32.009 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.009 ++ SPDK_TEST_NVMF=1 00:01:32.009 ++ SPDK_TEST_NVME_CLI=1 00:01:32.009 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.009 ++ SPDK_TEST_NVMF_NICS=e810 00:01:32.009 ++ SPDK_TEST_VFIOUSER=1 00:01:32.009 ++ SPDK_RUN_UBSAN=1 00:01:32.009 ++ NET_TYPE=phy 00:01:32.009 ++ RUN_NIGHTLY=0 00:01:32.009 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:32.009 + [[ -n '' ]] 00:01:32.009 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:32.009 + for M in /var/spdk/build-*-manifest.txt 00:01:32.009 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:32.009 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:32.009 + for M in /var/spdk/build-*-manifest.txt 00:01:32.009 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:32.009 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:32.009 + for M in /var/spdk/build-*-manifest.txt 00:01:32.009 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:32.009 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:32.009 ++ uname 00:01:32.009 + [[ Linux == \L\i\n\u\x ]] 00:01:32.009 + sudo dmesg -T 00:01:32.009 + sudo dmesg --clear 00:01:32.009 + dmesg_pid=2192607 00:01:32.009 + [[ Fedora Linux == FreeBSD ]] 00:01:32.009 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:32.009 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:32.009 + sudo dmesg -Tw 00:01:32.009 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:32.009 + [[ -x /usr/src/fio-static/fio ]] 00:01:32.009 + export FIO_BIN=/usr/src/fio-static/fio 00:01:32.009 + FIO_BIN=/usr/src/fio-static/fio 00:01:32.009 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:32.009 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:32.009 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:32.009 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:32.009 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:32.009 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:32.009 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:32.009 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:32.009 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:32.009 03:49:26 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:32.009 03:49:26 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:32.009 03:49:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.009 03:49:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:32.009 03:49:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:32.009 03:49:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.009 03:49:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:32.009 03:49:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:32.009 03:49:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:32.009 03:49:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:32.009 03:49:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:32.009 03:49:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:32.009 03:49:26 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:32.009 03:49:26 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:32.009 03:49:26 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:32.009 03:49:26 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:32.009 03:49:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:32.009 03:49:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:32.009 03:49:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:32.009 03:49:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.009 03:49:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.009 03:49:26 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.009 03:49:26 -- paths/export.sh@5 -- $ export PATH 00:01:32.009 03:49:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.009 03:49:26 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:32.009 03:49:26 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:32.009 03:49:26 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733798966.XXXXXX 00:01:32.009 03:49:26 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733798966.SmMppQ 00:01:32.009 03:49:26 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:32.009 03:49:26 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:32.009 03:49:26 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:32.009 03:49:26 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:32.009 03:49:26 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:32.009 03:49:26 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:32.009 03:49:26 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:32.009 03:49:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.009 03:49:26 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:32.009 03:49:26 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:32.009 03:49:26 -- pm/common@17 -- $ local monitor 00:01:32.009 03:49:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.009 03:49:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.009 03:49:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.009 03:49:26 -- pm/common@21 -- $ date +%s 00:01:32.009 03:49:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.009 03:49:26 -- pm/common@21 -- $ date +%s 00:01:32.009 03:49:26 -- pm/common@25 -- $ sleep 1 00:01:32.010 03:49:26 -- pm/common@21 -- $ date +%s 00:01:32.010 03:49:26 -- pm/common@21 -- $ date +%s 00:01:32.010 03:49:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733798966 00:01:32.010 03:49:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733798966 00:01:32.010 03:49:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733798966 00:01:32.010 03:49:26 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733798966 00:01:32.010 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733798966_collect-vmstat.pm.log 00:01:32.010 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733798966_collect-cpu-load.pm.log 00:01:32.010 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733798966_collect-cpu-temp.pm.log 00:01:32.010 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733798966_collect-bmc-pm.bmc.pm.log 00:01:32.948 03:49:27 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:32.948 03:49:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:32.948 03:49:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:32.948 03:49:27 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:32.948 03:49:27 -- spdk/autobuild.sh@16 -- $ date -u 00:01:33.207 Tue Dec 10 02:49:27 AM UTC 2024 00:01:33.207 03:49:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:33.207 v25.01-pre-312-g86d35c37a 00:01:33.207 03:49:27 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:33.207 03:49:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:33.207 03:49:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:33.207 03:49:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:33.207 03:49:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:33.207 03:49:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.207 ************************************ 00:01:33.207 START TEST ubsan 00:01:33.207 ************************************ 00:01:33.207 03:49:27 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:33.207 using ubsan 00:01:33.207 00:01:33.207 real 0m0.000s 00:01:33.207 user 0m0.000s 00:01:33.207 sys 0m0.000s 00:01:33.207 03:49:27 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:33.207 03:49:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:33.207 ************************************ 00:01:33.207 END TEST ubsan 00:01:33.207 ************************************ 00:01:33.207 03:49:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:33.207 03:49:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:33.207 03:49:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:33.207 03:49:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:33.207 03:49:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:33.207 03:49:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:33.207 03:49:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:33.207 03:49:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:33.207 
03:49:27 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:33.207 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:33.207 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:33.466 Using 'verbs' RDMA provider 00:01:44.015 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:54.001 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:54.260 Creating mk/config.mk...done. 00:01:54.260 Creating mk/cc.flags.mk...done. 00:01:54.260 Type 'make' to build. 00:01:54.260 03:49:48 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:01:54.260 03:49:48 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:54.260 03:49:48 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:54.260 03:49:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.519 ************************************ 00:01:54.519 START TEST make 00:01:54.519 ************************************ 00:01:54.519 03:49:48 make -- common/autotest_common.sh@1129 -- $ make -j48 00:01:54.780 make[1]: Nothing to be done for 'all'. 00:01:56.702 The Meson build system 00:01:56.702 Version: 1.5.0 00:01:56.702 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:56.702 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:56.702 Build type: native build 00:01:56.702 Project name: libvfio-user 00:01:56.702 Project version: 0.0.1 00:01:56.702 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:56.702 C linker for the host machine: cc ld.bfd 2.40-14 00:01:56.702 Host machine cpu family: x86_64 00:01:56.702 Host machine cpu: x86_64 00:01:56.702 Run-time dependency threads found: YES 00:01:56.702 Library dl found: YES 00:01:56.702 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:56.702 Run-time dependency json-c found: YES 0.17 00:01:56.702 Run-time dependency cmocka found: YES 1.1.7 00:01:56.702 Program pytest-3 found: NO 00:01:56.702 Program flake8 found: NO 00:01:56.702 Program misspell-fixer found: NO 00:01:56.702 Program restructuredtext-lint found: NO 00:01:56.702 Program valgrind found: YES (/usr/bin/valgrind) 00:01:56.702 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:56.702 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:56.702 Compiler for C supports arguments -Wwrite-strings: YES 00:01:56.702 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:56.702 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:56.702 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:56.702 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:56.702 Build targets in project: 8 00:01:56.702 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:56.702 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:56.702 00:01:56.702 libvfio-user 0.0.1 00:01:56.702 00:01:56.702 User defined options 00:01:56.702 buildtype : debug 00:01:56.702 default_library: shared 00:01:56.702 libdir : /usr/local/lib 00:01:56.702 00:01:56.702 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:57.276 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:57.539 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:57.539 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:57.539 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:57.539 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:57.539 [5/37] Compiling C object samples/null.p/null.c.o 00:01:57.539 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:57.539 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:57.539 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:57.539 [9/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:57.539 [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:57.539 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:57.539 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:57.539 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:57.539 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:57.539 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:57.801 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:57.801 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:57.801 [18/37] Compiling C object samples/server.p/server.c.o 00:01:57.801 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:57.801 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:57.801 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:57.801 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:57.801 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:57.801 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:57.801 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:57.801 [26/37] Compiling C object samples/client.p/client.c.o 00:01:57.801 [27/37] Linking target samples/client 00:01:57.801 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:57.801 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:57.801 [30/37] Linking target test/unit_tests 00:01:57.801 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:58.062 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:58.062 [33/37] Linking target samples/lspci 00:01:58.062 [34/37] Linking target samples/gpio-pci-idio-16 00:01:58.062 [35/37] Linking target samples/null 00:01:58.324 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:58.324 [37/37] Linking target samples/server 00:01:58.324 INFO: autodetecting backend as ninja 00:01:58.324 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:58.324 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:59.267 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:59.267 ninja: no work to do. 00:02:04.536 The Meson build system 00:02:04.536 Version: 1.5.0 00:02:04.536 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:04.536 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:04.536 Build type: native build 00:02:04.536 Program cat found: YES (/usr/bin/cat) 00:02:04.536 Project name: DPDK 00:02:04.536 Project version: 24.03.0 00:02:04.536 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:04.536 C linker for the host machine: cc ld.bfd 2.40-14 00:02:04.536 Host machine cpu family: x86_64 00:02:04.536 Host machine cpu: x86_64 00:02:04.536 Message: ## Building in Developer Mode ## 00:02:04.536 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:04.536 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:04.536 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:04.536 Program python3 found: YES (/usr/bin/python3) 00:02:04.536 Program cat found: YES (/usr/bin/cat) 00:02:04.536 Compiler for C supports arguments -march=native: YES 00:02:04.536 Checking for size of "void *" : 8 00:02:04.536 Checking for size of "void *" : 8 (cached) 00:02:04.536 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:04.536 Library m found: YES 00:02:04.536 Library numa found: YES 00:02:04.536 Has header "numaif.h" : YES 00:02:04.536 Library fdt found: NO 00:02:04.536 Library execinfo found: NO 00:02:04.536 Has header "execinfo.h" : YES 00:02:04.536 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:04.536 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:04.536 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:04.536 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:04.536 Run-time dependency openssl found: YES 3.1.1 00:02:04.536 Run-time dependency libpcap found: YES 1.10.4 00:02:04.536 Has header "pcap.h" with dependency libpcap: YES 00:02:04.536 Compiler for C supports arguments -Wcast-qual: YES 00:02:04.536 Compiler for C supports arguments -Wdeprecated: YES 00:02:04.536 Compiler for C supports arguments -Wformat: YES 00:02:04.536 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:04.536 Compiler for C supports arguments -Wformat-security: NO 00:02:04.536 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:04.536 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:04.536 Compiler for C supports arguments -Wnested-externs: YES 00:02:04.536 Compiler for C supports arguments -Wold-style-definition: YES 00:02:04.536 Compiler for C supports arguments -Wpointer-arith: YES 00:02:04.536 Compiler for C supports arguments -Wsign-compare: YES 00:02:04.536 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:04.536 Compiler for C supports arguments -Wundef: YES 00:02:04.536 Compiler for C supports arguments -Wwrite-strings: YES 00:02:04.536 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:04.536 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:02:04.536 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:04.536 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:04.536 Program objdump found: YES (/usr/bin/objdump) 00:02:04.536 Compiler for C supports arguments -mavx512f: YES 00:02:04.536 Checking if "AVX512 checking" compiles: YES 00:02:04.536 Fetching value of define "__SSE4_2__" : 1 00:02:04.536 Fetching value of define "__AES__" : 1 00:02:04.536 Fetching value of define "__AVX__" : 1 00:02:04.536 Fetching value of define "__AVX2__" : (undefined) 00:02:04.536 Fetching value of define "__AVX512BW__" : (undefined) 00:02:04.536 Fetching value of define "__AVX512CD__" : (undefined) 00:02:04.536 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:04.536 Fetching value of define "__AVX512F__" : (undefined) 00:02:04.536 Fetching value of define "__AVX512VL__" : (undefined) 00:02:04.536 Fetching value of define "__PCLMUL__" : 1 00:02:04.536 Fetching value of define "__RDRND__" : 1 00:02:04.536 Fetching value of define "__RDSEED__" : (undefined) 00:02:04.536 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:04.536 Fetching value of define "__znver1__" : (undefined) 00:02:04.536 Fetching value of define "__znver2__" : (undefined) 00:02:04.536 Fetching value of define "__znver3__" : (undefined) 00:02:04.536 Fetching value of define "__znver4__" : (undefined) 00:02:04.536 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:04.536 Message: lib/log: Defining dependency "log" 00:02:04.536 Message: lib/kvargs: Defining dependency "kvargs" 00:02:04.536 Message: lib/telemetry: Defining dependency "telemetry" 00:02:04.536 Checking for function "getentropy" : NO 00:02:04.536 Message: lib/eal: Defining dependency "eal" 00:02:04.536 Message: lib/ring: Defining dependency "ring" 00:02:04.536 Message: lib/rcu: Defining dependency "rcu" 00:02:04.536 Message: lib/mempool: Defining dependency "mempool" 00:02:04.536 Message: lib/mbuf: Defining dependency "mbuf" 00:02:04.536 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:04.536 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:04.536 Compiler for C supports arguments -mpclmul: YES 00:02:04.536 Compiler for C supports arguments -maes: YES 00:02:04.536 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:04.536 Compiler for C supports arguments -mavx512bw: YES 00:02:04.536 Compiler for C supports arguments -mavx512dq: YES 00:02:04.536 Compiler for C supports arguments -mavx512vl: YES 00:02:04.536 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:04.536 Compiler for C supports arguments -mavx2: YES 00:02:04.536 Compiler for C supports arguments -mavx: YES 00:02:04.536 Message: lib/net: Defining dependency "net" 00:02:04.536 Message: lib/meter: Defining dependency "meter" 00:02:04.536 Message: lib/ethdev: Defining dependency "ethdev" 00:02:04.536 Message: lib/pci: Defining dependency "pci" 00:02:04.536 Message: lib/cmdline: Defining dependency "cmdline" 00:02:04.536 Message: lib/hash: Defining dependency "hash" 00:02:04.536 Message: lib/timer: Defining dependency "timer" 00:02:04.536 Message: lib/compressdev: Defining dependency "compressdev" 00:02:04.536 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:04.536 Message: lib/dmadev: Defining dependency "dmadev" 00:02:04.536 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:04.536 Message: lib/power: Defining dependency "power" 00:02:04.536 Message: lib/reorder: Defining dependency 
"reorder" 00:02:04.536 Message: lib/security: Defining dependency "security" 00:02:04.536 Has header "linux/userfaultfd.h" : YES 00:02:04.536 Has header "linux/vduse.h" : YES 00:02:04.536 Message: lib/vhost: Defining dependency "vhost" 00:02:04.536 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:04.536 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:04.536 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:04.536 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:04.536 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:04.536 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:04.536 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:04.536 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:04.536 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:04.536 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:04.536 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:04.536 Configuring doxy-api-html.conf using configuration 00:02:04.536 Configuring doxy-api-man.conf using configuration 00:02:04.536 Program mandb found: YES (/usr/bin/mandb) 00:02:04.536 Program sphinx-build found: NO 00:02:04.536 Configuring rte_build_config.h using configuration 00:02:04.536 Message: 00:02:04.536 ================= 00:02:04.536 Applications Enabled 00:02:04.536 ================= 00:02:04.536 00:02:04.536 apps: 00:02:04.536 00:02:04.536 00:02:04.536 Message: 00:02:04.536 ================= 00:02:04.536 Libraries Enabled 00:02:04.536 ================= 00:02:04.536 00:02:04.536 libs: 00:02:04.536 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:04.536 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:04.536 cryptodev, dmadev, power, reorder, security, vhost, 00:02:04.536 00:02:04.536 Message: 00:02:04.536 =============== 00:02:04.536 Drivers Enabled 00:02:04.536 =============== 00:02:04.536 00:02:04.536 common: 00:02:04.536 00:02:04.536 bus: 00:02:04.536 pci, vdev, 00:02:04.536 mempool: 00:02:04.536 ring, 00:02:04.536 dma: 00:02:04.536 00:02:04.536 net: 00:02:04.536 00:02:04.536 crypto: 00:02:04.536 00:02:04.536 compress: 00:02:04.536 00:02:04.536 vdpa: 00:02:04.536 00:02:04.536 00:02:04.536 Message: 00:02:04.536 ================= 00:02:04.536 Content Skipped 00:02:04.536 ================= 00:02:04.536 00:02:04.536 apps: 00:02:04.536 dumpcap: explicitly disabled via build config 00:02:04.536 graph: explicitly disabled via build config 00:02:04.537 pdump: explicitly disabled via build config 00:02:04.537 proc-info: explicitly disabled via build config 00:02:04.537 test-acl: explicitly disabled via build config 00:02:04.537 test-bbdev: explicitly disabled via build config 00:02:04.537 test-cmdline: explicitly disabled via build config 00:02:04.537 test-compress-perf: explicitly disabled via build config 00:02:04.537 test-crypto-perf: explicitly disabled via build config 00:02:04.537 test-dma-perf: explicitly disabled via build config 00:02:04.537 test-eventdev: explicitly disabled via build config 00:02:04.537 test-fib: explicitly disabled via build config 00:02:04.537 test-flow-perf: explicitly disabled via build config 00:02:04.537 test-gpudev: explicitly disabled via build config 00:02:04.537 test-mldev: explicitly disabled via build config 00:02:04.537 test-pipeline: explicitly disabled via build config 00:02:04.537 test-pmd: explicitly 
disabled via build config 00:02:04.537 test-regex: explicitly disabled via build config 00:02:04.537 test-sad: explicitly disabled via build config 00:02:04.537 test-security-perf: explicitly disabled via build config 00:02:04.537 00:02:04.537 libs: 00:02:04.537 argparse: explicitly disabled via build config 00:02:04.537 metrics: explicitly disabled via build config 00:02:04.537 acl: explicitly disabled via build config 00:02:04.537 bbdev: explicitly disabled via build config 00:02:04.537 bitratestats: explicitly disabled via build config 00:02:04.537 bpf: explicitly disabled via build config 00:02:04.537 cfgfile: explicitly disabled via build config 00:02:04.537 distributor: explicitly disabled via build config 00:02:04.537 efd: explicitly disabled via build config 00:02:04.537 eventdev: explicitly disabled via build config 00:02:04.537 dispatcher: explicitly disabled via build config 00:02:04.537 gpudev: explicitly disabled via build config 00:02:04.537 gro: explicitly disabled via build config 00:02:04.537 gso: explicitly disabled via build config 00:02:04.537 ip_frag: explicitly disabled via build config 00:02:04.537 jobstats: explicitly disabled via build config 00:02:04.537 latencystats: explicitly disabled via build config 00:02:04.537 lpm: explicitly disabled via build config 00:02:04.537 member: explicitly disabled via build config 00:02:04.537 pcapng: explicitly disabled via build config 00:02:04.537 rawdev: explicitly disabled via build config 00:02:04.537 regexdev: explicitly disabled via build config 00:02:04.537 mldev: explicitly disabled via build config 00:02:04.537 rib: explicitly disabled via build config 00:02:04.537 sched: explicitly disabled via build config 00:02:04.537 stack: explicitly disabled via build config 00:02:04.537 ipsec: explicitly disabled via build config 00:02:04.537 pdcp: explicitly disabled via build config 00:02:04.537 fib: explicitly disabled via build config 00:02:04.537 port: explicitly disabled via build config 00:02:04.537 pdump: explicitly disabled via build config 00:02:04.537 table: explicitly disabled via build config 00:02:04.537 pipeline: explicitly disabled via build config 00:02:04.537 graph: explicitly disabled via build config 00:02:04.537 node: explicitly disabled via build config 00:02:04.537 00:02:04.537 drivers: 00:02:04.537 common/cpt: not in enabled drivers build config 00:02:04.537 common/dpaax: not in enabled drivers build config 00:02:04.537 common/iavf: not in enabled drivers build config 00:02:04.537 common/idpf: not in enabled drivers build config 00:02:04.537 common/ionic: not in enabled drivers build config 00:02:04.537 common/mvep: not in enabled drivers build config 00:02:04.537 common/octeontx: not in enabled drivers build config 00:02:04.537 bus/auxiliary: not in enabled drivers build config 00:02:04.537 bus/cdx: not in enabled drivers build config 00:02:04.537 bus/dpaa: not in enabled drivers build config 00:02:04.537 bus/fslmc: not in enabled drivers build config 00:02:04.537 bus/ifpga: not in enabled drivers build config 00:02:04.537 bus/platform: not in enabled drivers build config 00:02:04.537 bus/uacce: not in enabled drivers build config 00:02:04.537 bus/vmbus: not in enabled drivers build config 00:02:04.537 common/cnxk: not in enabled drivers build config 00:02:04.537 common/mlx5: not in enabled drivers build config 00:02:04.537 common/nfp: not in enabled drivers build config 00:02:04.537 common/nitrox: not in enabled drivers build config 00:02:04.537 common/qat: not in enabled drivers build config 
00:02:04.537 common/sfc_efx: not in enabled drivers build config 00:02:04.537 mempool/bucket: not in enabled drivers build config 00:02:04.537 mempool/cnxk: not in enabled drivers build config 00:02:04.537 mempool/dpaa: not in enabled drivers build config 00:02:04.537 mempool/dpaa2: not in enabled drivers build config 00:02:04.537 mempool/octeontx: not in enabled drivers build config 00:02:04.537 mempool/stack: not in enabled drivers build config 00:02:04.537 dma/cnxk: not in enabled drivers build config 00:02:04.537 dma/dpaa: not in enabled drivers build config 00:02:04.537 dma/dpaa2: not in enabled drivers build config 00:02:04.537 dma/hisilicon: not in enabled drivers build config 00:02:04.537 dma/idxd: not in enabled drivers build config 00:02:04.537 dma/ioat: not in enabled drivers build config 00:02:04.537 dma/skeleton: not in enabled drivers build config 00:02:04.537 net/af_packet: not in enabled drivers build config 00:02:04.537 net/af_xdp: not in enabled drivers build config 00:02:04.537 net/ark: not in enabled drivers build config 00:02:04.537 net/atlantic: not in enabled drivers build config 00:02:04.537 net/avp: not in enabled drivers build config 00:02:04.537 net/axgbe: not in enabled drivers build config 00:02:04.537 net/bnx2x: not in enabled drivers build config 00:02:04.537 net/bnxt: not in enabled drivers build config 00:02:04.537 net/bonding: not in enabled drivers build config 00:02:04.537 net/cnxk: not in enabled drivers build config 00:02:04.537 net/cpfl: not in enabled drivers build config 00:02:04.537 net/cxgbe: not in enabled drivers build config 00:02:04.537 net/dpaa: not in enabled drivers build config 00:02:04.537 net/dpaa2: not in enabled drivers build config 00:02:04.537 net/e1000: not in enabled drivers build config 00:02:04.537 net/ena: not in enabled drivers build config 00:02:04.537 net/enetc: not in enabled drivers build config 00:02:04.537 net/enetfec: not in enabled drivers build config 00:02:04.537 net/enic: not in enabled drivers build config 00:02:04.537 net/failsafe: not in enabled drivers build config 00:02:04.537 net/fm10k: not in enabled drivers build config 00:02:04.537 net/gve: not in enabled drivers build config 00:02:04.537 net/hinic: not in enabled drivers build config 00:02:04.537 net/hns3: not in enabled drivers build config 00:02:04.537 net/i40e: not in enabled drivers build config 00:02:04.537 net/iavf: not in enabled drivers build config 00:02:04.537 net/ice: not in enabled drivers build config 00:02:04.537 net/idpf: not in enabled drivers build config 00:02:04.537 net/igc: not in enabled drivers build config 00:02:04.537 net/ionic: not in enabled drivers build config 00:02:04.537 net/ipn3ke: not in enabled drivers build config 00:02:04.537 net/ixgbe: not in enabled drivers build config 00:02:04.537 net/mana: not in enabled drivers build config 00:02:04.537 net/memif: not in enabled drivers build config 00:02:04.537 net/mlx4: not in enabled drivers build config 00:02:04.537 net/mlx5: not in enabled drivers build config 00:02:04.537 net/mvneta: not in enabled drivers build config 00:02:04.537 net/mvpp2: not in enabled drivers build config 00:02:04.537 net/netvsc: not in enabled drivers build config 00:02:04.537 net/nfb: not in enabled drivers build config 00:02:04.537 net/nfp: not in enabled drivers build config 00:02:04.537 net/ngbe: not in enabled drivers build config 00:02:04.537 net/null: not in enabled drivers build config 00:02:04.537 net/octeontx: not in enabled drivers build config 00:02:04.537 net/octeon_ep: not in enabled 
drivers build config 00:02:04.537 net/pcap: not in enabled drivers build config 00:02:04.537 net/pfe: not in enabled drivers build config 00:02:04.537 net/qede: not in enabled drivers build config 00:02:04.537 net/ring: not in enabled drivers build config 00:02:04.537 net/sfc: not in enabled drivers build config 00:02:04.537 net/softnic: not in enabled drivers build config 00:02:04.537 net/tap: not in enabled drivers build config 00:02:04.537 net/thunderx: not in enabled drivers build config 00:02:04.537 net/txgbe: not in enabled drivers build config 00:02:04.537 net/vdev_netvsc: not in enabled drivers build config 00:02:04.537 net/vhost: not in enabled drivers build config 00:02:04.537 net/virtio: not in enabled drivers build config 00:02:04.537 net/vmxnet3: not in enabled drivers build config 00:02:04.537 raw/*: missing internal dependency, "rawdev" 00:02:04.537 crypto/armv8: not in enabled drivers build config 00:02:04.537 crypto/bcmfs: not in enabled drivers build config 00:02:04.537 crypto/caam_jr: not in enabled drivers build config 00:02:04.537 crypto/ccp: not in enabled drivers build config 00:02:04.537 crypto/cnxk: not in enabled drivers build config 00:02:04.537 crypto/dpaa_sec: not in enabled drivers build config 00:02:04.537 crypto/dpaa2_sec: not in enabled drivers build config 00:02:04.537 crypto/ipsec_mb: not in enabled drivers build config 00:02:04.537 crypto/mlx5: not in enabled drivers build config 00:02:04.537 crypto/mvsam: not in enabled drivers build config 00:02:04.537 crypto/nitrox: not in enabled drivers build config 00:02:04.537 crypto/null: not in enabled drivers build config 00:02:04.537 crypto/octeontx: not in enabled drivers build config 00:02:04.537 crypto/openssl: not in enabled drivers build config 00:02:04.537 crypto/scheduler: not in enabled drivers build config 00:02:04.537 crypto/uadk: not in enabled drivers build config 00:02:04.537 crypto/virtio: not in enabled drivers build config 00:02:04.537 compress/isal: not in enabled drivers build config 00:02:04.537 compress/mlx5: not in enabled drivers build config 00:02:04.537 compress/nitrox: not in enabled drivers build config 00:02:04.537 compress/octeontx: not in enabled drivers build config 00:02:04.537 compress/zlib: not in enabled drivers build config 00:02:04.537 regex/*: missing internal dependency, "regexdev" 00:02:04.537 ml/*: missing internal dependency, "mldev" 00:02:04.537 vdpa/ifc: not in enabled drivers build config 00:02:04.537 vdpa/mlx5: not in enabled drivers build config 00:02:04.537 vdpa/nfp: not in enabled drivers build config 00:02:04.537 vdpa/sfc: not in enabled drivers build config 00:02:04.537 event/*: missing internal dependency, "eventdev" 00:02:04.537 baseband/*: missing internal dependency, "bbdev" 00:02:04.537 gpu/*: missing internal dependency, "gpudev" 00:02:04.537 00:02:04.537 00:02:04.537 Build targets in project: 85 00:02:04.537 00:02:04.537 DPDK 24.03.0 00:02:04.537 00:02:04.537 User defined options 00:02:04.538 buildtype : debug 00:02:04.538 default_library : shared 00:02:04.538 libdir : lib 00:02:04.538 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:04.538 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:04.538 c_link_args : 00:02:04.538 cpu_instruction_set: native 00:02:04.538 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:04.538 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:04.538 enable_docs : false 00:02:04.538 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:04.538 enable_kmods : false 00:02:04.538 max_lcores : 128 00:02:04.538 tests : false 00:02:04.538 00:02:04.538 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:04.538 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:04.538 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:04.538 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:04.538 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:04.538 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:04.538 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:04.538 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:04.538 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:04.538 [8/268] Linking static target lib/librte_kvargs.a 00:02:04.538 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:04.538 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:04.538 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:04.538 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:04.538 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:04.802 [14/268] Linking static target lib/librte_log.a 00:02:04.802 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:04.802 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:05.376 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.376 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:05.376 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:05.376 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:05.376 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:05.376 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:05.376 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:05.376 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:05.376 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:05.376 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:05.376 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:05.376 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:05.376 [29/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:05.376 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:05.639 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:05.639 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:05.639 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:05.639 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:05.639 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:05.639 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:05.639 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:05.639 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:05.639 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:05.639 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:05.639 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:05.639 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:05.639 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:05.639 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:05.639 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:05.639 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.639 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:05.639 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:05.639 [49/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:05.639 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:05.639 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:05.639 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:05.639 [53/268] Linking static target lib/librte_telemetry.a 00:02:05.639 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:05.639 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:05.639 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:05.639 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:05.639 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:05.639 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:05.639 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:05.639 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:05.900 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:05.900 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:05.900 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:05.900 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.900 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:05.900 [67/268] Linking target lib/librte_log.so.24.1 00:02:06.162 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:06.162 [69/268] Compiling C object 
lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:06.162 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:06.162 [71/268] Linking static target lib/librte_pci.a 00:02:06.162 [72/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:06.162 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:06.162 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:06.425 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:06.425 [76/268] Linking target lib/librte_kvargs.so.24.1 00:02:06.425 [77/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:06.425 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:06.425 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:06.425 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:06.425 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:06.425 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:06.425 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:06.425 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:06.425 [85/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:06.425 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:06.425 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:06.425 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:06.425 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:06.425 [90/268] Linking static target lib/librte_ring.a 00:02:06.425 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:06.425 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:06.687 [93/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:06.687 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:06.687 [95/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:06.687 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:06.687 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:06.687 [98/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:06.687 [99/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:06.687 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:06.687 [101/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.687 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:06.687 [103/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:06.687 [104/268] Linking static target lib/librte_meter.a 00:02:06.687 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:06.687 [106/268] Linking target lib/librte_telemetry.so.24.1 00:02:06.687 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:06.687 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:06.687 [109/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:06.687 [110/268] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:06.687 [111/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:06.687 [112/268] Linking static target lib/librte_rcu.a 00:02:06.687 [113/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:06.687 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:06.687 [115/268] Linking static target lib/librte_mempool.a 00:02:06.687 [116/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:06.687 [117/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.687 [118/268] Linking static target lib/librte_eal.a 00:02:06.951 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:06.951 [120/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:06.951 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:06.951 [122/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:06.951 [123/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:06.951 [124/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:06.951 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:06.951 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:06.951 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:06.951 [128/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:06.951 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:06.951 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:06.951 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:06.951 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:07.216 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:07.216 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:07.216 [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.216 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:07.216 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:07.216 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:07.216 [139/268] Linking static target lib/librte_net.a 00:02:07.216 [140/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.216 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:07.216 [142/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.216 [143/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:07.476 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:07.476 [145/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:07.476 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:07.476 [147/268] Linking static target lib/librte_cmdline.a 00:02:07.476 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:07.476 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:07.476 [150/268] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:07.476 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:07.476 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:07.477 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:07.477 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:07.477 [155/268] Linking static target lib/librte_timer.a 00:02:07.736 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:07.736 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:07.736 [158/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.736 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:07.736 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:07.736 [161/268] Linking static target lib/librte_dmadev.a 00:02:07.736 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:07.736 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:07.736 [164/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.995 [165/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:07.995 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:07.995 [167/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:07.995 [168/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:07.995 [169/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:07.995 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:07.995 [171/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.995 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:07.995 [173/268] Linking static target lib/librte_power.a 00:02:07.995 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:07.995 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:07.995 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:07.995 [177/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:07.995 [178/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:07.995 [179/268] Linking static target lib/librte_compressdev.a 00:02:07.996 [180/268] Linking static target lib/librte_hash.a 00:02:07.996 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:08.254 [182/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:08.254 [183/268] Linking static target lib/librte_mbuf.a 00:02:08.254 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:08.254 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:08.254 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:08.254 [187/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.254 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:08.254 [189/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:08.254 [190/268] 
Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:08.254 [191/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:08.254 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:08.254 [193/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.254 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:08.513 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:08.513 [196/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:08.513 [197/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:08.513 [198/268] Linking static target lib/librte_security.a 00:02:08.513 [199/268] Linking static target lib/librte_reorder.a 00:02:08.513 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:08.513 [201/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:08.513 [202/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:08.513 [203/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:08.513 [204/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:08.513 [205/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:08.513 [206/268] Linking static target drivers/librte_bus_vdev.a 00:02:08.513 [207/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:08.513 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:08.513 [209/268] Linking static target drivers/librte_bus_pci.a 00:02:08.513 [210/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.513 [211/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.513 [212/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:08.513 [213/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.772 [214/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.772 [215/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:08.772 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.772 [217/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.772 [218/268] Linking static target drivers/librte_mempool_ring.a 00:02:08.772 [219/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.772 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.772 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.772 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:08.772 [223/268] Linking static target lib/librte_cryptodev.a 00:02:09.030 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.031 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:09.031 [226/268] Linking static target lib/librte_ethdev.a 00:02:09.966 [227/268] Generating lib/cryptodev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:11.340 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:13.241 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.241 [230/268] Linking target lib/librte_eal.so.24.1 00:02:13.241 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:13.241 [232/268] Linking target lib/librte_timer.so.24.1 00:02:13.241 [233/268] Linking target lib/librte_dmadev.so.24.1 00:02:13.241 [234/268] Linking target lib/librte_meter.so.24.1 00:02:13.241 [235/268] Linking target lib/librte_pci.so.24.1 00:02:13.241 [236/268] Linking target lib/librte_ring.so.24.1 00:02:13.241 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:13.241 [238/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.499 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:13.499 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:13.499 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:13.500 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:13.500 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:13.500 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:13.500 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:13.500 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:13.758 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:13.758 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:13.758 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:13.758 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:13.758 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:13.758 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:13.758 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:13.758 [254/268] Linking target lib/librte_net.so.24.1 00:02:13.758 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:14.084 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:14.084 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:14.084 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:14.084 [259/268] Linking target lib/librte_hash.so.24.1 00:02:14.084 [260/268] Linking target lib/librte_security.so.24.1 00:02:14.084 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:14.084 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:14.084 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:14.342 [264/268] Linking target lib/librte_power.so.24.1 00:02:17.623 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:17.623 [266/268] Linking static target lib/librte_vhost.a 00:02:18.188 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.188 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:18.188 INFO: autodetecting backend as ninja 00:02:18.188 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 
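The configuration summary near the top of this build section (tests : false, enable_docs : false, enable_kmods : false, max_lcores : 128, plus the disable_libs and enable_drivers lists) is what meson prints once the bundled DPDK tree has been configured. As a minimal sketch of how such a configuration could be requested on the command line — the option names follow DPDK's meson_options.txt, the exact wrapper SPDK uses is not shown in this log, and the long lists are abbreviated to a few entries, so treat the invocation as an assumption rather than the recorded command:

  # run from the dpdk source directory; build-tmp matches the directory seen in the log
  meson setup build-tmp \
      -Dtests=false -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 \
      -Ddisable_libs=bbdev,gpudev,pipeline \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring
  ninja -C build-tmp -j 48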
00:02:40.201 CC lib/ut_mock/mock.o 00:02:40.201 CC lib/log/log.o 00:02:40.201 CC lib/log/log_flags.o 00:02:40.201 CC lib/log/log_deprecated.o 00:02:40.201 CC lib/ut/ut.o 00:02:40.201 LIB libspdk_ut.a 00:02:40.201 LIB libspdk_ut_mock.a 00:02:40.201 LIB libspdk_log.a 00:02:40.201 SO libspdk_ut.so.2.0 00:02:40.201 SO libspdk_ut_mock.so.6.0 00:02:40.201 SO libspdk_log.so.7.1 00:02:40.201 SYMLINK libspdk_ut.so 00:02:40.201 SYMLINK libspdk_ut_mock.so 00:02:40.201 SYMLINK libspdk_log.so 00:02:40.201 CC lib/ioat/ioat.o 00:02:40.201 CC lib/dma/dma.o 00:02:40.201 CXX lib/trace_parser/trace.o 00:02:40.201 CC lib/util/base64.o 00:02:40.201 CC lib/util/bit_array.o 00:02:40.201 CC lib/util/cpuset.o 00:02:40.201 CC lib/util/crc16.o 00:02:40.201 CC lib/util/crc32.o 00:02:40.201 CC lib/util/crc32c.o 00:02:40.201 CC lib/util/crc32_ieee.o 00:02:40.201 CC lib/util/crc64.o 00:02:40.201 CC lib/util/dif.o 00:02:40.201 CC lib/util/fd.o 00:02:40.201 CC lib/util/fd_group.o 00:02:40.201 CC lib/util/file.o 00:02:40.201 CC lib/util/hexlify.o 00:02:40.201 CC lib/util/iov.o 00:02:40.201 CC lib/util/math.o 00:02:40.201 CC lib/util/net.o 00:02:40.201 CC lib/util/pipe.o 00:02:40.201 CC lib/util/strerror_tls.o 00:02:40.201 CC lib/util/string.o 00:02:40.201 CC lib/util/uuid.o 00:02:40.201 CC lib/util/xor.o 00:02:40.201 CC lib/util/zipf.o 00:02:40.201 CC lib/util/md5.o 00:02:40.201 CC lib/vfio_user/host/vfio_user_pci.o 00:02:40.201 CC lib/vfio_user/host/vfio_user.o 00:02:40.201 LIB libspdk_dma.a 00:02:40.201 SO libspdk_dma.so.5.0 00:02:40.201 LIB libspdk_ioat.a 00:02:40.201 SO libspdk_ioat.so.7.0 00:02:40.201 SYMLINK libspdk_dma.so 00:02:40.201 SYMLINK libspdk_ioat.so 00:02:40.201 LIB libspdk_vfio_user.a 00:02:40.201 SO libspdk_vfio_user.so.5.0 00:02:40.201 SYMLINK libspdk_vfio_user.so 00:02:40.201 LIB libspdk_util.a 00:02:40.201 SO libspdk_util.so.10.1 00:02:40.201 SYMLINK libspdk_util.so 00:02:40.201 CC lib/conf/conf.o 00:02:40.201 CC lib/rdma_utils/rdma_utils.o 00:02:40.201 CC lib/vmd/vmd.o 00:02:40.201 CC lib/idxd/idxd.o 00:02:40.201 CC lib/json/json_parse.o 00:02:40.201 CC lib/vmd/led.o 00:02:40.201 CC lib/env_dpdk/env.o 00:02:40.201 CC lib/idxd/idxd_user.o 00:02:40.201 CC lib/env_dpdk/memory.o 00:02:40.201 CC lib/json/json_util.o 00:02:40.201 CC lib/env_dpdk/pci.o 00:02:40.201 CC lib/idxd/idxd_kernel.o 00:02:40.201 CC lib/json/json_write.o 00:02:40.201 CC lib/env_dpdk/init.o 00:02:40.201 CC lib/env_dpdk/threads.o 00:02:40.201 CC lib/env_dpdk/pci_ioat.o 00:02:40.201 CC lib/env_dpdk/pci_virtio.o 00:02:40.201 CC lib/env_dpdk/pci_vmd.o 00:02:40.201 CC lib/env_dpdk/pci_idxd.o 00:02:40.201 CC lib/env_dpdk/pci_event.o 00:02:40.201 CC lib/env_dpdk/sigbus_handler.o 00:02:40.201 CC lib/env_dpdk/pci_dpdk.o 00:02:40.201 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:40.201 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:40.201 LIB libspdk_trace_parser.a 00:02:40.201 SO libspdk_trace_parser.so.6.0 00:02:40.201 SYMLINK libspdk_trace_parser.so 00:02:40.201 LIB libspdk_conf.a 00:02:40.201 SO libspdk_conf.so.6.0 00:02:40.201 LIB libspdk_rdma_utils.a 00:02:40.201 SYMLINK libspdk_conf.so 00:02:40.201 SO libspdk_rdma_utils.so.1.0 00:02:40.201 LIB libspdk_json.a 00:02:40.201 SO libspdk_json.so.6.0 00:02:40.201 SYMLINK libspdk_rdma_utils.so 00:02:40.201 SYMLINK libspdk_json.so 00:02:40.201 CC lib/rdma_provider/common.o 00:02:40.201 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:40.201 CC lib/jsonrpc/jsonrpc_server.o 00:02:40.201 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:40.201 CC lib/jsonrpc/jsonrpc_client.o 00:02:40.201 CC lib/jsonrpc/jsonrpc_client_tcp.o 
00:02:40.201 LIB libspdk_idxd.a 00:02:40.201 SO libspdk_idxd.so.12.1 00:02:40.201 LIB libspdk_vmd.a 00:02:40.459 SO libspdk_vmd.so.6.0 00:02:40.459 SYMLINK libspdk_idxd.so 00:02:40.459 SYMLINK libspdk_vmd.so 00:02:40.459 LIB libspdk_rdma_provider.a 00:02:40.459 SO libspdk_rdma_provider.so.7.0 00:02:40.459 SYMLINK libspdk_rdma_provider.so 00:02:40.459 LIB libspdk_jsonrpc.a 00:02:40.459 SO libspdk_jsonrpc.so.6.0 00:02:40.717 SYMLINK libspdk_jsonrpc.so 00:02:40.717 CC lib/rpc/rpc.o 00:02:40.976 LIB libspdk_rpc.a 00:02:40.976 SO libspdk_rpc.so.6.0 00:02:40.976 SYMLINK libspdk_rpc.so 00:02:41.234 CC lib/trace/trace.o 00:02:41.234 CC lib/trace/trace_flags.o 00:02:41.234 CC lib/trace/trace_rpc.o 00:02:41.234 CC lib/keyring/keyring.o 00:02:41.234 CC lib/notify/notify.o 00:02:41.234 CC lib/notify/notify_rpc.o 00:02:41.234 CC lib/keyring/keyring_rpc.o 00:02:41.492 LIB libspdk_notify.a 00:02:41.492 SO libspdk_notify.so.6.0 00:02:41.492 SYMLINK libspdk_notify.so 00:02:41.492 LIB libspdk_keyring.a 00:02:41.492 LIB libspdk_trace.a 00:02:41.492 SO libspdk_keyring.so.2.0 00:02:41.492 SO libspdk_trace.so.11.0 00:02:41.492 SYMLINK libspdk_keyring.so 00:02:41.492 SYMLINK libspdk_trace.so 00:02:41.750 LIB libspdk_env_dpdk.a 00:02:41.750 CC lib/thread/thread.o 00:02:41.750 CC lib/thread/iobuf.o 00:02:41.750 CC lib/sock/sock.o 00:02:41.750 CC lib/sock/sock_rpc.o 00:02:41.750 SO libspdk_env_dpdk.so.15.1 00:02:42.007 SYMLINK libspdk_env_dpdk.so 00:02:42.007 LIB libspdk_sock.a 00:02:42.265 SO libspdk_sock.so.10.0 00:02:42.265 SYMLINK libspdk_sock.so 00:02:42.265 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:42.265 CC lib/nvme/nvme_ctrlr.o 00:02:42.265 CC lib/nvme/nvme_fabric.o 00:02:42.265 CC lib/nvme/nvme_ns_cmd.o 00:02:42.265 CC lib/nvme/nvme_ns.o 00:02:42.265 CC lib/nvme/nvme_pcie_common.o 00:02:42.265 CC lib/nvme/nvme_pcie.o 00:02:42.265 CC lib/nvme/nvme_qpair.o 00:02:42.265 CC lib/nvme/nvme.o 00:02:42.265 CC lib/nvme/nvme_quirks.o 00:02:42.265 CC lib/nvme/nvme_transport.o 00:02:42.265 CC lib/nvme/nvme_discovery.o 00:02:42.265 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:42.265 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:42.265 CC lib/nvme/nvme_tcp.o 00:02:42.265 CC lib/nvme/nvme_opal.o 00:02:42.265 CC lib/nvme/nvme_io_msg.o 00:02:42.265 CC lib/nvme/nvme_poll_group.o 00:02:42.265 CC lib/nvme/nvme_zns.o 00:02:42.265 CC lib/nvme/nvme_stubs.o 00:02:42.265 CC lib/nvme/nvme_auth.o 00:02:42.265 CC lib/nvme/nvme_cuse.o 00:02:42.265 CC lib/nvme/nvme_vfio_user.o 00:02:42.265 CC lib/nvme/nvme_rdma.o 00:02:43.200 LIB libspdk_thread.a 00:02:43.458 SO libspdk_thread.so.11.0 00:02:43.458 SYMLINK libspdk_thread.so 00:02:43.458 CC lib/init/json_config.o 00:02:43.458 CC lib/blob/blobstore.o 00:02:43.458 CC lib/vfu_tgt/tgt_endpoint.o 00:02:43.458 CC lib/virtio/virtio.o 00:02:43.458 CC lib/init/subsystem.o 00:02:43.458 CC lib/accel/accel.o 00:02:43.458 CC lib/fsdev/fsdev.o 00:02:43.458 CC lib/vfu_tgt/tgt_rpc.o 00:02:43.458 CC lib/virtio/virtio_vhost_user.o 00:02:43.458 CC lib/blob/zeroes.o 00:02:43.458 CC lib/init/subsystem_rpc.o 00:02:43.458 CC lib/fsdev/fsdev_io.o 00:02:43.458 CC lib/blob/request.o 00:02:43.458 CC lib/virtio/virtio_vfio_user.o 00:02:43.458 CC lib/accel/accel_rpc.o 00:02:43.458 CC lib/fsdev/fsdev_rpc.o 00:02:43.458 CC lib/init/rpc.o 00:02:43.458 CC lib/accel/accel_sw.o 00:02:43.458 CC lib/virtio/virtio_pci.o 00:02:43.458 CC lib/blob/blob_bs_dev.o 00:02:44.025 LIB libspdk_init.a 00:02:44.025 SO libspdk_init.so.6.0 00:02:44.025 LIB libspdk_virtio.a 00:02:44.025 LIB libspdk_vfu_tgt.a 00:02:44.025 SYMLINK libspdk_init.so 00:02:44.025 
SO libspdk_vfu_tgt.so.3.0 00:02:44.025 SO libspdk_virtio.so.7.0 00:02:44.025 SYMLINK libspdk_vfu_tgt.so 00:02:44.025 SYMLINK libspdk_virtio.so 00:02:44.025 CC lib/event/app.o 00:02:44.025 CC lib/event/reactor.o 00:02:44.025 CC lib/event/log_rpc.o 00:02:44.025 CC lib/event/app_rpc.o 00:02:44.025 CC lib/event/scheduler_static.o 00:02:44.283 LIB libspdk_fsdev.a 00:02:44.283 SO libspdk_fsdev.so.2.0 00:02:44.283 SYMLINK libspdk_fsdev.so 00:02:44.541 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:44.541 LIB libspdk_event.a 00:02:44.541 SO libspdk_event.so.14.0 00:02:44.541 SYMLINK libspdk_event.so 00:02:44.799 LIB libspdk_accel.a 00:02:44.799 SO libspdk_accel.so.16.0 00:02:44.799 SYMLINK libspdk_accel.so 00:02:44.799 LIB libspdk_nvme.a 00:02:45.057 CC lib/bdev/bdev.o 00:02:45.057 CC lib/bdev/bdev_rpc.o 00:02:45.057 CC lib/bdev/bdev_zone.o 00:02:45.057 CC lib/bdev/part.o 00:02:45.057 CC lib/bdev/scsi_nvme.o 00:02:45.057 SO libspdk_nvme.so.15.0 00:02:45.057 LIB libspdk_fuse_dispatcher.a 00:02:45.314 SO libspdk_fuse_dispatcher.so.1.0 00:02:45.314 SYMLINK libspdk_nvme.so 00:02:45.314 SYMLINK libspdk_fuse_dispatcher.so 00:02:46.694 LIB libspdk_blob.a 00:02:46.694 SO libspdk_blob.so.12.0 00:02:46.952 SYMLINK libspdk_blob.so 00:02:46.952 CC lib/blobfs/blobfs.o 00:02:46.952 CC lib/blobfs/tree.o 00:02:46.952 CC lib/lvol/lvol.o 00:02:47.518 LIB libspdk_bdev.a 00:02:47.775 SO libspdk_bdev.so.17.0 00:02:47.775 SYMLINK libspdk_bdev.so 00:02:47.775 LIB libspdk_blobfs.a 00:02:47.775 SO libspdk_blobfs.so.11.0 00:02:48.040 CC lib/nbd/nbd.o 00:02:48.040 CC lib/ublk/ublk.o 00:02:48.040 CC lib/nbd/nbd_rpc.o 00:02:48.040 CC lib/nvmf/ctrlr.o 00:02:48.040 CC lib/ftl/ftl_core.o 00:02:48.040 CC lib/ublk/ublk_rpc.o 00:02:48.040 CC lib/scsi/dev.o 00:02:48.040 CC lib/nvmf/ctrlr_discovery.o 00:02:48.040 CC lib/ftl/ftl_init.o 00:02:48.040 CC lib/ftl/ftl_layout.o 00:02:48.040 CC lib/scsi/lun.o 00:02:48.040 CC lib/nvmf/ctrlr_bdev.o 00:02:48.040 CC lib/scsi/port.o 00:02:48.040 CC lib/ftl/ftl_debug.o 00:02:48.040 CC lib/nvmf/subsystem.o 00:02:48.040 CC lib/ftl/ftl_io.o 00:02:48.040 CC lib/scsi/scsi.o 00:02:48.040 CC lib/nvmf/nvmf.o 00:02:48.040 CC lib/scsi/scsi_bdev.o 00:02:48.040 CC lib/ftl/ftl_sb.o 00:02:48.040 CC lib/nvmf/nvmf_rpc.o 00:02:48.040 CC lib/scsi/scsi_pr.o 00:02:48.040 CC lib/nvmf/transport.o 00:02:48.040 CC lib/ftl/ftl_l2p.o 00:02:48.040 CC lib/scsi/scsi_rpc.o 00:02:48.040 CC lib/ftl/ftl_l2p_flat.o 00:02:48.040 CC lib/nvmf/tcp.o 00:02:48.040 CC lib/scsi/task.o 00:02:48.040 CC lib/ftl/ftl_nv_cache.o 00:02:48.040 CC lib/nvmf/stubs.o 00:02:48.040 CC lib/ftl/ftl_band.o 00:02:48.040 CC lib/nvmf/mdns_server.o 00:02:48.040 CC lib/nvmf/vfio_user.o 00:02:48.040 CC lib/ftl/ftl_band_ops.o 00:02:48.040 CC lib/ftl/ftl_writer.o 00:02:48.040 CC lib/nvmf/rdma.o 00:02:48.040 CC lib/nvmf/auth.o 00:02:48.040 CC lib/ftl/ftl_rq.o 00:02:48.040 CC lib/ftl/ftl_reloc.o 00:02:48.040 CC lib/ftl/ftl_l2p_cache.o 00:02:48.040 CC lib/ftl/ftl_p2l.o 00:02:48.040 CC lib/ftl/ftl_p2l_log.o 00:02:48.040 CC lib/ftl/mngt/ftl_mngt.o 00:02:48.040 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:48.040 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:48.040 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:48.040 SYMLINK libspdk_blobfs.so 00:02:48.040 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:48.040 LIB libspdk_lvol.a 00:02:48.040 SO libspdk_lvol.so.11.0 00:02:48.040 SYMLINK libspdk_lvol.so 00:02:48.299 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:48.299 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:48.299 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:48.299 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:48.299 CC 
lib/ftl/mngt/ftl_mngt_band.o 00:02:48.299 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:48.299 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:48.299 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:48.299 CC lib/ftl/utils/ftl_conf.o 00:02:48.299 CC lib/ftl/utils/ftl_md.o 00:02:48.299 CC lib/ftl/utils/ftl_mempool.o 00:02:48.299 CC lib/ftl/utils/ftl_bitmap.o 00:02:48.299 CC lib/ftl/utils/ftl_property.o 00:02:48.299 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:48.563 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:48.564 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:48.564 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:48.564 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:48.564 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:48.564 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:48.564 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:48.564 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:48.564 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:48.564 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:48.564 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:48.564 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:48.564 CC lib/ftl/base/ftl_base_dev.o 00:02:48.821 CC lib/ftl/base/ftl_base_bdev.o 00:02:48.821 CC lib/ftl/ftl_trace.o 00:02:48.821 LIB libspdk_nbd.a 00:02:48.821 SO libspdk_nbd.so.7.0 00:02:48.821 LIB libspdk_scsi.a 00:02:48.821 SYMLINK libspdk_nbd.so 00:02:48.821 SO libspdk_scsi.so.9.0 00:02:49.079 SYMLINK libspdk_scsi.so 00:02:49.079 LIB libspdk_ublk.a 00:02:49.079 SO libspdk_ublk.so.3.0 00:02:49.079 SYMLINK libspdk_ublk.so 00:02:49.079 CC lib/vhost/vhost.o 00:02:49.079 CC lib/iscsi/conn.o 00:02:49.079 CC lib/iscsi/init_grp.o 00:02:49.079 CC lib/vhost/vhost_rpc.o 00:02:49.079 CC lib/vhost/vhost_scsi.o 00:02:49.079 CC lib/iscsi/iscsi.o 00:02:49.079 CC lib/iscsi/param.o 00:02:49.079 CC lib/vhost/vhost_blk.o 00:02:49.079 CC lib/iscsi/portal_grp.o 00:02:49.079 CC lib/vhost/rte_vhost_user.o 00:02:49.079 CC lib/iscsi/tgt_node.o 00:02:49.079 CC lib/iscsi/iscsi_subsystem.o 00:02:49.079 CC lib/iscsi/iscsi_rpc.o 00:02:49.079 CC lib/iscsi/task.o 00:02:49.337 LIB libspdk_ftl.a 00:02:49.594 SO libspdk_ftl.so.9.0 00:02:49.852 SYMLINK libspdk_ftl.so 00:02:50.418 LIB libspdk_vhost.a 00:02:50.418 SO libspdk_vhost.so.8.0 00:02:50.418 SYMLINK libspdk_vhost.so 00:02:50.676 LIB libspdk_iscsi.a 00:02:50.676 LIB libspdk_nvmf.a 00:02:50.676 SO libspdk_iscsi.so.8.0 00:02:50.676 SO libspdk_nvmf.so.20.0 00:02:50.942 SYMLINK libspdk_iscsi.so 00:02:50.942 SYMLINK libspdk_nvmf.so 00:02:51.202 CC module/env_dpdk/env_dpdk_rpc.o 00:02:51.202 CC module/vfu_device/vfu_virtio.o 00:02:51.202 CC module/vfu_device/vfu_virtio_blk.o 00:02:51.202 CC module/vfu_device/vfu_virtio_scsi.o 00:02:51.202 CC module/vfu_device/vfu_virtio_rpc.o 00:02:51.202 CC module/vfu_device/vfu_virtio_fs.o 00:02:51.202 CC module/sock/posix/posix.o 00:02:51.202 CC module/scheduler/gscheduler/gscheduler.o 00:02:51.202 CC module/fsdev/aio/fsdev_aio.o 00:02:51.202 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:51.202 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:51.202 CC module/blob/bdev/blob_bdev.o 00:02:51.202 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:51.202 CC module/accel/dsa/accel_dsa.o 00:02:51.202 CC module/fsdev/aio/linux_aio_mgr.o 00:02:51.202 CC module/keyring/file/keyring.o 00:02:51.202 CC module/keyring/linux/keyring.o 00:02:51.202 CC module/accel/iaa/accel_iaa.o 00:02:51.202 CC module/accel/dsa/accel_dsa_rpc.o 00:02:51.202 CC module/keyring/linux/keyring_rpc.o 00:02:51.202 CC module/keyring/file/keyring_rpc.o 00:02:51.202 CC module/accel/iaa/accel_iaa_rpc.o 00:02:51.202 CC module/accel/ioat/accel_ioat.o 00:02:51.202 
CC module/accel/ioat/accel_ioat_rpc.o 00:02:51.202 CC module/accel/error/accel_error.o 00:02:51.202 CC module/accel/error/accel_error_rpc.o 00:02:51.460 LIB libspdk_env_dpdk_rpc.a 00:02:51.460 SO libspdk_env_dpdk_rpc.so.6.0 00:02:51.460 LIB libspdk_keyring_linux.a 00:02:51.460 LIB libspdk_scheduler_gscheduler.a 00:02:51.460 LIB libspdk_keyring_file.a 00:02:51.460 LIB libspdk_scheduler_dpdk_governor.a 00:02:51.460 SO libspdk_keyring_linux.so.1.0 00:02:51.460 SO libspdk_scheduler_gscheduler.so.4.0 00:02:51.460 SYMLINK libspdk_env_dpdk_rpc.so 00:02:51.460 SO libspdk_keyring_file.so.2.0 00:02:51.460 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:51.461 LIB libspdk_accel_ioat.a 00:02:51.461 LIB libspdk_scheduler_dynamic.a 00:02:51.461 SYMLINK libspdk_scheduler_gscheduler.so 00:02:51.461 LIB libspdk_accel_iaa.a 00:02:51.461 SYMLINK libspdk_keyring_linux.so 00:02:51.461 LIB libspdk_accel_error.a 00:02:51.461 SO libspdk_accel_ioat.so.6.0 00:02:51.461 SO libspdk_scheduler_dynamic.so.4.0 00:02:51.461 SYMLINK libspdk_keyring_file.so 00:02:51.461 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:51.461 SO libspdk_accel_iaa.so.3.0 00:02:51.461 SO libspdk_accel_error.so.2.0 00:02:51.461 SYMLINK libspdk_accel_ioat.so 00:02:51.461 SYMLINK libspdk_scheduler_dynamic.so 00:02:51.461 LIB libspdk_blob_bdev.a 00:02:51.461 LIB libspdk_accel_dsa.a 00:02:51.461 SYMLINK libspdk_accel_error.so 00:02:51.461 SYMLINK libspdk_accel_iaa.so 00:02:51.719 SO libspdk_blob_bdev.so.12.0 00:02:51.719 SO libspdk_accel_dsa.so.5.0 00:02:51.719 SYMLINK libspdk_blob_bdev.so 00:02:51.719 SYMLINK libspdk_accel_dsa.so 00:02:51.719 LIB libspdk_vfu_device.a 00:02:51.983 SO libspdk_vfu_device.so.3.0 00:02:51.983 CC module/bdev/nvme/bdev_nvme.o 00:02:51.983 CC module/bdev/lvol/vbdev_lvol.o 00:02:51.983 CC module/bdev/null/bdev_null.o 00:02:51.983 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:51.983 CC module/blobfs/bdev/blobfs_bdev.o 00:02:51.983 CC module/bdev/delay/vbdev_delay.o 00:02:51.983 CC module/bdev/malloc/bdev_malloc.o 00:02:51.983 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:51.983 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:51.983 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:51.983 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:51.983 CC module/bdev/gpt/gpt.o 00:02:51.983 CC module/bdev/null/bdev_null_rpc.o 00:02:51.983 CC module/bdev/error/vbdev_error.o 00:02:51.983 CC module/bdev/gpt/vbdev_gpt.o 00:02:51.983 CC module/bdev/nvme/nvme_rpc.o 00:02:51.983 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:51.983 CC module/bdev/error/vbdev_error_rpc.o 00:02:51.983 CC module/bdev/nvme/bdev_mdns_client.o 00:02:51.983 CC module/bdev/ftl/bdev_ftl.o 00:02:51.983 CC module/bdev/nvme/vbdev_opal.o 00:02:51.983 CC module/bdev/raid/bdev_raid.o 00:02:51.983 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:51.983 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:51.983 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:51.983 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:51.983 CC module/bdev/raid/bdev_raid_rpc.o 00:02:51.983 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:51.983 CC module/bdev/iscsi/bdev_iscsi.o 00:02:51.983 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:51.983 CC module/bdev/passthru/vbdev_passthru.o 00:02:51.983 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:51.983 CC module/bdev/raid/bdev_raid_sb.o 00:02:51.983 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:51.983 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:51.983 CC module/bdev/aio/bdev_aio.o 00:02:51.983 CC module/bdev/raid/raid0.o 00:02:51.983 CC module/bdev/split/vbdev_split.o 
00:02:51.983 CC module/bdev/raid/raid1.o 00:02:51.983 CC module/bdev/split/vbdev_split_rpc.o 00:02:51.983 CC module/bdev/aio/bdev_aio_rpc.o 00:02:51.983 CC module/bdev/raid/concat.o 00:02:51.983 SYMLINK libspdk_vfu_device.so 00:02:51.983 LIB libspdk_fsdev_aio.a 00:02:52.242 SO libspdk_fsdev_aio.so.1.0 00:02:52.242 LIB libspdk_sock_posix.a 00:02:52.242 SO libspdk_sock_posix.so.6.0 00:02:52.242 SYMLINK libspdk_fsdev_aio.so 00:02:52.242 LIB libspdk_bdev_split.a 00:02:52.242 LIB libspdk_blobfs_bdev.a 00:02:52.242 SO libspdk_bdev_split.so.6.0 00:02:52.242 SO libspdk_blobfs_bdev.so.6.0 00:02:52.242 SYMLINK libspdk_sock_posix.so 00:02:52.500 SYMLINK libspdk_bdev_split.so 00:02:52.500 SYMLINK libspdk_blobfs_bdev.so 00:02:52.500 LIB libspdk_bdev_null.a 00:02:52.500 LIB libspdk_bdev_error.a 00:02:52.500 SO libspdk_bdev_null.so.6.0 00:02:52.500 LIB libspdk_bdev_gpt.a 00:02:52.500 SO libspdk_bdev_error.so.6.0 00:02:52.500 SO libspdk_bdev_gpt.so.6.0 00:02:52.500 LIB libspdk_bdev_aio.a 00:02:52.500 LIB libspdk_bdev_passthru.a 00:02:52.500 LIB libspdk_bdev_ftl.a 00:02:52.500 SYMLINK libspdk_bdev_null.so 00:02:52.500 SYMLINK libspdk_bdev_error.so 00:02:52.500 SO libspdk_bdev_aio.so.6.0 00:02:52.500 SO libspdk_bdev_passthru.so.6.0 00:02:52.500 SO libspdk_bdev_ftl.so.6.0 00:02:52.500 SYMLINK libspdk_bdev_gpt.so 00:02:52.500 LIB libspdk_bdev_zone_block.a 00:02:52.500 SYMLINK libspdk_bdev_passthru.so 00:02:52.500 SYMLINK libspdk_bdev_aio.so 00:02:52.500 SYMLINK libspdk_bdev_ftl.so 00:02:52.500 LIB libspdk_bdev_iscsi.a 00:02:52.500 SO libspdk_bdev_zone_block.so.6.0 00:02:52.500 LIB libspdk_bdev_malloc.a 00:02:52.500 SO libspdk_bdev_iscsi.so.6.0 00:02:52.500 SO libspdk_bdev_malloc.so.6.0 00:02:52.500 LIB libspdk_bdev_delay.a 00:02:52.500 SYMLINK libspdk_bdev_zone_block.so 00:02:52.500 SYMLINK libspdk_bdev_iscsi.so 00:02:52.758 LIB libspdk_bdev_virtio.a 00:02:52.758 SO libspdk_bdev_delay.so.6.0 00:02:52.758 SYMLINK libspdk_bdev_malloc.so 00:02:52.758 SO libspdk_bdev_virtio.so.6.0 00:02:52.758 SYMLINK libspdk_bdev_delay.so 00:02:52.758 LIB libspdk_bdev_lvol.a 00:02:52.758 SYMLINK libspdk_bdev_virtio.so 00:02:52.758 SO libspdk_bdev_lvol.so.6.0 00:02:52.758 SYMLINK libspdk_bdev_lvol.so 00:02:53.016 LIB libspdk_bdev_raid.a 00:02:53.274 SO libspdk_bdev_raid.so.6.0 00:02:53.274 SYMLINK libspdk_bdev_raid.so 00:02:54.647 LIB libspdk_bdev_nvme.a 00:02:54.647 SO libspdk_bdev_nvme.so.7.1 00:02:54.905 SYMLINK libspdk_bdev_nvme.so 00:02:55.163 CC module/event/subsystems/iobuf/iobuf.o 00:02:55.163 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:55.163 CC module/event/subsystems/scheduler/scheduler.o 00:02:55.163 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:55.163 CC module/event/subsystems/vmd/vmd.o 00:02:55.163 CC module/event/subsystems/fsdev/fsdev.o 00:02:55.163 CC module/event/subsystems/sock/sock.o 00:02:55.163 CC module/event/subsystems/keyring/keyring.o 00:02:55.163 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:55.163 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:55.421 LIB libspdk_event_keyring.a 00:02:55.421 LIB libspdk_event_vhost_blk.a 00:02:55.421 LIB libspdk_event_scheduler.a 00:02:55.421 LIB libspdk_event_fsdev.a 00:02:55.421 LIB libspdk_event_vmd.a 00:02:55.421 LIB libspdk_event_vfu_tgt.a 00:02:55.421 LIB libspdk_event_sock.a 00:02:55.421 LIB libspdk_event_iobuf.a 00:02:55.421 SO libspdk_event_keyring.so.1.0 00:02:55.421 SO libspdk_event_scheduler.so.4.0 00:02:55.421 SO libspdk_event_vhost_blk.so.3.0 00:02:55.421 SO libspdk_event_fsdev.so.1.0 00:02:55.421 SO libspdk_event_vfu_tgt.so.3.0 
00:02:55.421 SO libspdk_event_vmd.so.6.0 00:02:55.421 SO libspdk_event_sock.so.5.0 00:02:55.421 SO libspdk_event_iobuf.so.3.0 00:02:55.421 SYMLINK libspdk_event_keyring.so 00:02:55.421 SYMLINK libspdk_event_scheduler.so 00:02:55.421 SYMLINK libspdk_event_vhost_blk.so 00:02:55.421 SYMLINK libspdk_event_fsdev.so 00:02:55.421 SYMLINK libspdk_event_vfu_tgt.so 00:02:55.421 SYMLINK libspdk_event_sock.so 00:02:55.421 SYMLINK libspdk_event_vmd.so 00:02:55.421 SYMLINK libspdk_event_iobuf.so 00:02:55.681 CC module/event/subsystems/accel/accel.o 00:02:55.681 LIB libspdk_event_accel.a 00:02:55.681 SO libspdk_event_accel.so.6.0 00:02:55.941 SYMLINK libspdk_event_accel.so 00:02:55.941 CC module/event/subsystems/bdev/bdev.o 00:02:56.199 LIB libspdk_event_bdev.a 00:02:56.199 SO libspdk_event_bdev.so.6.0 00:02:56.199 SYMLINK libspdk_event_bdev.so 00:02:56.458 CC module/event/subsystems/scsi/scsi.o 00:02:56.458 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:56.458 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:56.458 CC module/event/subsystems/ublk/ublk.o 00:02:56.458 CC module/event/subsystems/nbd/nbd.o 00:02:56.458 LIB libspdk_event_nbd.a 00:02:56.458 LIB libspdk_event_ublk.a 00:02:56.458 LIB libspdk_event_scsi.a 00:02:56.716 SO libspdk_event_nbd.so.6.0 00:02:56.716 SO libspdk_event_ublk.so.3.0 00:02:56.716 SO libspdk_event_scsi.so.6.0 00:02:56.716 SYMLINK libspdk_event_nbd.so 00:02:56.716 SYMLINK libspdk_event_ublk.so 00:02:56.716 SYMLINK libspdk_event_scsi.so 00:02:56.716 LIB libspdk_event_nvmf.a 00:02:56.716 SO libspdk_event_nvmf.so.6.0 00:02:56.716 SYMLINK libspdk_event_nvmf.so 00:02:56.716 CC module/event/subsystems/iscsi/iscsi.o 00:02:56.716 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:56.976 LIB libspdk_event_vhost_scsi.a 00:02:56.976 LIB libspdk_event_iscsi.a 00:02:56.976 SO libspdk_event_vhost_scsi.so.3.0 00:02:56.976 SO libspdk_event_iscsi.so.6.0 00:02:56.976 SYMLINK libspdk_event_vhost_scsi.so 00:02:56.976 SYMLINK libspdk_event_iscsi.so 00:02:57.234 SO libspdk.so.6.0 00:02:57.234 SYMLINK libspdk.so 00:02:57.234 CC app/spdk_lspci/spdk_lspci.o 00:02:57.234 CC app/trace_record/trace_record.o 00:02:57.234 CXX app/trace/trace.o 00:02:57.234 CC app/spdk_top/spdk_top.o 00:02:57.494 CC app/spdk_nvme_discover/discovery_aer.o 00:02:57.494 CC test/rpc_client/rpc_client_test.o 00:02:57.494 CC app/spdk_nvme_perf/perf.o 00:02:57.494 TEST_HEADER include/spdk/accel.h 00:02:57.494 TEST_HEADER include/spdk/accel_module.h 00:02:57.494 TEST_HEADER include/spdk/assert.h 00:02:57.494 TEST_HEADER include/spdk/barrier.h 00:02:57.494 TEST_HEADER include/spdk/base64.h 00:02:57.494 TEST_HEADER include/spdk/bdev_module.h 00:02:57.494 TEST_HEADER include/spdk/bdev.h 00:02:57.494 TEST_HEADER include/spdk/bdev_zone.h 00:02:57.494 CC app/spdk_nvme_identify/identify.o 00:02:57.494 TEST_HEADER include/spdk/bit_array.h 00:02:57.494 TEST_HEADER include/spdk/bit_pool.h 00:02:57.494 TEST_HEADER include/spdk/blob_bdev.h 00:02:57.494 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:57.494 TEST_HEADER include/spdk/blobfs.h 00:02:57.494 TEST_HEADER include/spdk/blob.h 00:02:57.494 TEST_HEADER include/spdk/conf.h 00:02:57.494 TEST_HEADER include/spdk/config.h 00:02:57.494 TEST_HEADER include/spdk/cpuset.h 00:02:57.494 TEST_HEADER include/spdk/crc16.h 00:02:57.494 TEST_HEADER include/spdk/crc32.h 00:02:57.494 TEST_HEADER include/spdk/crc64.h 00:02:57.494 TEST_HEADER include/spdk/dma.h 00:02:57.494 TEST_HEADER include/spdk/dif.h 00:02:57.494 TEST_HEADER include/spdk/endian.h 00:02:57.494 TEST_HEADER include/spdk/env_dpdk.h 
00:02:57.494 TEST_HEADER include/spdk/env.h 00:02:57.494 TEST_HEADER include/spdk/event.h 00:02:57.494 TEST_HEADER include/spdk/fd_group.h 00:02:57.494 TEST_HEADER include/spdk/fd.h 00:02:57.494 TEST_HEADER include/spdk/file.h 00:02:57.494 TEST_HEADER include/spdk/fsdev.h 00:02:57.494 TEST_HEADER include/spdk/fsdev_module.h 00:02:57.494 TEST_HEADER include/spdk/ftl.h 00:02:57.494 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:57.494 TEST_HEADER include/spdk/gpt_spec.h 00:02:57.494 TEST_HEADER include/spdk/hexlify.h 00:02:57.494 TEST_HEADER include/spdk/histogram_data.h 00:02:57.494 TEST_HEADER include/spdk/idxd.h 00:02:57.494 TEST_HEADER include/spdk/idxd_spec.h 00:02:57.494 TEST_HEADER include/spdk/init.h 00:02:57.494 TEST_HEADER include/spdk/ioat.h 00:02:57.494 TEST_HEADER include/spdk/iscsi_spec.h 00:02:57.494 TEST_HEADER include/spdk/ioat_spec.h 00:02:57.494 TEST_HEADER include/spdk/json.h 00:02:57.494 TEST_HEADER include/spdk/jsonrpc.h 00:02:57.494 TEST_HEADER include/spdk/keyring.h 00:02:57.494 TEST_HEADER include/spdk/keyring_module.h 00:02:57.494 TEST_HEADER include/spdk/likely.h 00:02:57.494 TEST_HEADER include/spdk/log.h 00:02:57.494 TEST_HEADER include/spdk/lvol.h 00:02:57.494 TEST_HEADER include/spdk/md5.h 00:02:57.494 TEST_HEADER include/spdk/memory.h 00:02:57.494 TEST_HEADER include/spdk/mmio.h 00:02:57.494 TEST_HEADER include/spdk/nbd.h 00:02:57.494 TEST_HEADER include/spdk/net.h 00:02:57.494 TEST_HEADER include/spdk/notify.h 00:02:57.494 TEST_HEADER include/spdk/nvme.h 00:02:57.494 TEST_HEADER include/spdk/nvme_intel.h 00:02:57.494 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:57.494 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:57.494 TEST_HEADER include/spdk/nvme_spec.h 00:02:57.494 TEST_HEADER include/spdk/nvme_zns.h 00:02:57.494 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:57.494 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:57.494 TEST_HEADER include/spdk/nvmf.h 00:02:57.494 TEST_HEADER include/spdk/nvmf_spec.h 00:02:57.494 TEST_HEADER include/spdk/nvmf_transport.h 00:02:57.494 TEST_HEADER include/spdk/opal.h 00:02:57.494 TEST_HEADER include/spdk/opal_spec.h 00:02:57.494 TEST_HEADER include/spdk/pci_ids.h 00:02:57.494 TEST_HEADER include/spdk/pipe.h 00:02:57.494 TEST_HEADER include/spdk/reduce.h 00:02:57.494 TEST_HEADER include/spdk/queue.h 00:02:57.494 TEST_HEADER include/spdk/rpc.h 00:02:57.494 TEST_HEADER include/spdk/scsi.h 00:02:57.494 TEST_HEADER include/spdk/scheduler.h 00:02:57.494 TEST_HEADER include/spdk/scsi_spec.h 00:02:57.494 TEST_HEADER include/spdk/sock.h 00:02:57.494 TEST_HEADER include/spdk/stdinc.h 00:02:57.494 TEST_HEADER include/spdk/string.h 00:02:57.494 TEST_HEADER include/spdk/thread.h 00:02:57.494 TEST_HEADER include/spdk/trace.h 00:02:57.494 TEST_HEADER include/spdk/trace_parser.h 00:02:57.494 TEST_HEADER include/spdk/tree.h 00:02:57.494 TEST_HEADER include/spdk/util.h 00:02:57.494 TEST_HEADER include/spdk/ublk.h 00:02:57.494 TEST_HEADER include/spdk/uuid.h 00:02:57.494 TEST_HEADER include/spdk/version.h 00:02:57.494 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:57.494 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:57.494 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:57.494 CC app/spdk_dd/spdk_dd.o 00:02:57.494 TEST_HEADER include/spdk/vhost.h 00:02:57.494 TEST_HEADER include/spdk/vmd.h 00:02:57.494 TEST_HEADER include/spdk/xor.h 00:02:57.494 TEST_HEADER include/spdk/zipf.h 00:02:57.494 CXX test/cpp_headers/accel.o 00:02:57.494 CXX test/cpp_headers/accel_module.o 00:02:57.494 CXX test/cpp_headers/assert.o 00:02:57.494 CXX 
test/cpp_headers/barrier.o 00:02:57.494 CC app/iscsi_tgt/iscsi_tgt.o 00:02:57.494 CXX test/cpp_headers/base64.o 00:02:57.494 CXX test/cpp_headers/bdev.o 00:02:57.494 CXX test/cpp_headers/bdev_module.o 00:02:57.494 CXX test/cpp_headers/bdev_zone.o 00:02:57.494 CC app/nvmf_tgt/nvmf_main.o 00:02:57.494 CXX test/cpp_headers/bit_array.o 00:02:57.494 CXX test/cpp_headers/bit_pool.o 00:02:57.494 CXX test/cpp_headers/blob_bdev.o 00:02:57.494 CXX test/cpp_headers/blobfs_bdev.o 00:02:57.494 CXX test/cpp_headers/blobfs.o 00:02:57.494 CXX test/cpp_headers/blob.o 00:02:57.494 CXX test/cpp_headers/conf.o 00:02:57.494 CXX test/cpp_headers/config.o 00:02:57.494 CXX test/cpp_headers/cpuset.o 00:02:57.494 CXX test/cpp_headers/crc16.o 00:02:57.494 CXX test/cpp_headers/crc32.o 00:02:57.494 CC app/spdk_tgt/spdk_tgt.o 00:02:57.494 CC test/thread/poller_perf/poller_perf.o 00:02:57.494 CC examples/ioat/perf/perf.o 00:02:57.494 CC test/app/jsoncat/jsoncat.o 00:02:57.494 CC test/env/vtophys/vtophys.o 00:02:57.494 CC app/fio/nvme/fio_plugin.o 00:02:57.494 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:57.494 CC test/env/memory/memory_ut.o 00:02:57.494 CC test/app/histogram_perf/histogram_perf.o 00:02:57.494 CC examples/util/zipf/zipf.o 00:02:57.494 CC test/env/pci/pci_ut.o 00:02:57.494 CC examples/ioat/verify/verify.o 00:02:57.494 CC test/app/stub/stub.o 00:02:57.494 CC app/fio/bdev/fio_plugin.o 00:02:57.494 CC test/dma/test_dma/test_dma.o 00:02:57.494 CC test/app/bdev_svc/bdev_svc.o 00:02:57.758 LINK spdk_lspci 00:02:57.758 CC test/env/mem_callbacks/mem_callbacks.o 00:02:57.758 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:57.758 LINK rpc_client_test 00:02:57.758 LINK interrupt_tgt 00:02:57.758 LINK spdk_nvme_discover 00:02:57.758 LINK nvmf_tgt 00:02:57.758 LINK vtophys 00:02:57.758 LINK poller_perf 00:02:57.758 LINK histogram_perf 00:02:57.758 LINK jsoncat 00:02:58.021 LINK zipf 00:02:58.021 LINK iscsi_tgt 00:02:58.021 CXX test/cpp_headers/crc64.o 00:02:58.021 CXX test/cpp_headers/dif.o 00:02:58.021 CXX test/cpp_headers/dma.o 00:02:58.021 CXX test/cpp_headers/endian.o 00:02:58.021 LINK env_dpdk_post_init 00:02:58.021 CXX test/cpp_headers/env_dpdk.o 00:02:58.021 CXX test/cpp_headers/env.o 00:02:58.021 LINK spdk_trace_record 00:02:58.021 CXX test/cpp_headers/event.o 00:02:58.021 CXX test/cpp_headers/fd_group.o 00:02:58.021 CXX test/cpp_headers/fd.o 00:02:58.021 CXX test/cpp_headers/file.o 00:02:58.021 CXX test/cpp_headers/fsdev.o 00:02:58.021 CXX test/cpp_headers/fsdev_module.o 00:02:58.021 CXX test/cpp_headers/ftl.o 00:02:58.021 LINK ioat_perf 00:02:58.021 CXX test/cpp_headers/fuse_dispatcher.o 00:02:58.021 CXX test/cpp_headers/gpt_spec.o 00:02:58.021 LINK stub 00:02:58.021 CXX test/cpp_headers/hexlify.o 00:02:58.021 LINK spdk_tgt 00:02:58.021 CXX test/cpp_headers/histogram_data.o 00:02:58.021 LINK verify 00:02:58.021 LINK bdev_svc 00:02:58.021 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:58.021 CXX test/cpp_headers/idxd.o 00:02:58.021 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:58.021 CXX test/cpp_headers/idxd_spec.o 00:02:58.286 CXX test/cpp_headers/init.o 00:02:58.286 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:58.286 CXX test/cpp_headers/ioat.o 00:02:58.286 CXX test/cpp_headers/ioat_spec.o 00:02:58.286 CXX test/cpp_headers/iscsi_spec.o 00:02:58.286 LINK spdk_dd 00:02:58.286 CXX test/cpp_headers/json.o 00:02:58.286 CXX test/cpp_headers/jsonrpc.o 00:02:58.286 LINK spdk_trace 00:02:58.286 CXX test/cpp_headers/keyring.o 00:02:58.286 CXX test/cpp_headers/keyring_module.o 00:02:58.286 CXX 
test/cpp_headers/likely.o 00:02:58.286 CXX test/cpp_headers/log.o 00:02:58.286 CXX test/cpp_headers/lvol.o 00:02:58.286 CXX test/cpp_headers/md5.o 00:02:58.286 CXX test/cpp_headers/memory.o 00:02:58.286 CXX test/cpp_headers/mmio.o 00:02:58.286 LINK pci_ut 00:02:58.286 CXX test/cpp_headers/nbd.o 00:02:58.286 CXX test/cpp_headers/net.o 00:02:58.286 CXX test/cpp_headers/notify.o 00:02:58.286 CXX test/cpp_headers/nvme.o 00:02:58.548 CXX test/cpp_headers/nvme_intel.o 00:02:58.548 CXX test/cpp_headers/nvme_ocssd.o 00:02:58.548 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:58.548 CXX test/cpp_headers/nvme_spec.o 00:02:58.548 CXX test/cpp_headers/nvme_zns.o 00:02:58.548 CXX test/cpp_headers/nvmf_cmd.o 00:02:58.548 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:58.548 CC test/event/event_perf/event_perf.o 00:02:58.548 CC test/event/reactor/reactor.o 00:02:58.548 CXX test/cpp_headers/nvmf.o 00:02:58.548 CC test/event/reactor_perf/reactor_perf.o 00:02:58.548 CXX test/cpp_headers/nvmf_spec.o 00:02:58.548 LINK nvme_fuzz 00:02:58.548 CXX test/cpp_headers/nvmf_transport.o 00:02:58.548 CXX test/cpp_headers/opal.o 00:02:58.548 CXX test/cpp_headers/opal_spec.o 00:02:58.548 CXX test/cpp_headers/pci_ids.o 00:02:58.548 LINK spdk_nvme 00:02:58.548 LINK spdk_bdev 00:02:58.548 CC test/event/app_repeat/app_repeat.o 00:02:58.548 CC examples/vmd/lsvmd/lsvmd.o 00:02:58.549 CXX test/cpp_headers/pipe.o 00:02:58.549 CC examples/sock/hello_world/hello_sock.o 00:02:58.549 CC test/event/scheduler/scheduler.o 00:02:58.549 LINK test_dma 00:02:58.549 CC examples/idxd/perf/perf.o 00:02:58.812 CC examples/thread/thread/thread_ex.o 00:02:58.812 CC examples/vmd/led/led.o 00:02:58.812 CXX test/cpp_headers/queue.o 00:02:58.812 CXX test/cpp_headers/reduce.o 00:02:58.812 CXX test/cpp_headers/rpc.o 00:02:58.812 CXX test/cpp_headers/scheduler.o 00:02:58.812 CXX test/cpp_headers/scsi.o 00:02:58.812 CXX test/cpp_headers/scsi_spec.o 00:02:58.812 CXX test/cpp_headers/sock.o 00:02:58.812 CXX test/cpp_headers/stdinc.o 00:02:58.812 CXX test/cpp_headers/string.o 00:02:58.812 CXX test/cpp_headers/thread.o 00:02:58.812 CXX test/cpp_headers/trace.o 00:02:58.812 CXX test/cpp_headers/trace_parser.o 00:02:58.812 LINK reactor 00:02:58.812 CXX test/cpp_headers/tree.o 00:02:58.812 CXX test/cpp_headers/ublk.o 00:02:58.812 LINK reactor_perf 00:02:58.812 LINK event_perf 00:02:58.812 CXX test/cpp_headers/util.o 00:02:58.812 CXX test/cpp_headers/uuid.o 00:02:58.812 CXX test/cpp_headers/version.o 00:02:58.812 CXX test/cpp_headers/vfio_user_pci.o 00:02:58.812 CXX test/cpp_headers/vfio_user_spec.o 00:02:58.812 CXX test/cpp_headers/vhost.o 00:02:58.812 LINK mem_callbacks 00:02:58.812 CC app/vhost/vhost.o 00:02:58.812 CXX test/cpp_headers/vmd.o 00:02:59.073 LINK spdk_nvme_perf 00:02:59.073 CXX test/cpp_headers/xor.o 00:02:59.073 CXX test/cpp_headers/zipf.o 00:02:59.073 LINK lsvmd 00:02:59.073 LINK app_repeat 00:02:59.073 LINK vhost_fuzz 00:02:59.073 LINK spdk_nvme_identify 00:02:59.073 LINK led 00:02:59.073 LINK hello_sock 00:02:59.073 LINK scheduler 00:02:59.073 LINK spdk_top 00:02:59.073 LINK thread 00:02:59.332 CC test/nvme/reset/reset.o 00:02:59.332 CC test/nvme/fused_ordering/fused_ordering.o 00:02:59.332 CC test/nvme/overhead/overhead.o 00:02:59.332 CC test/nvme/boot_partition/boot_partition.o 00:02:59.332 CC test/nvme/startup/startup.o 00:02:59.332 CC test/nvme/sgl/sgl.o 00:02:59.332 CC test/nvme/reserve/reserve.o 00:02:59.332 CC test/nvme/aer/aer.o 00:02:59.332 CC test/nvme/e2edp/nvme_dp.o 00:02:59.333 CC test/nvme/connect_stress/connect_stress.o 00:02:59.333 
CC test/nvme/compliance/nvme_compliance.o 00:02:59.333 CC test/nvme/err_injection/err_injection.o 00:02:59.333 CC test/nvme/simple_copy/simple_copy.o 00:02:59.333 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:59.333 LINK idxd_perf 00:02:59.333 LINK vhost 00:02:59.333 CC test/nvme/cuse/cuse.o 00:02:59.333 CC test/nvme/fdp/fdp.o 00:02:59.333 CC test/blobfs/mkfs/mkfs.o 00:02:59.333 CC test/accel/dif/dif.o 00:02:59.333 CC test/lvol/esnap/esnap.o 00:02:59.591 LINK startup 00:02:59.591 LINK connect_stress 00:02:59.591 LINK err_injection 00:02:59.591 LINK doorbell_aers 00:02:59.591 CC examples/nvme/abort/abort.o 00:02:59.591 CC examples/nvme/hello_world/hello_world.o 00:02:59.591 CC examples/nvme/hotplug/hotplug.o 00:02:59.591 LINK fused_ordering 00:02:59.591 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:59.591 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:59.591 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:59.591 CC examples/nvme/reconnect/reconnect.o 00:02:59.591 CC examples/nvme/arbitration/arbitration.o 00:02:59.591 LINK simple_copy 00:02:59.591 LINK mkfs 00:02:59.591 LINK boot_partition 00:02:59.591 LINK reset 00:02:59.591 LINK nvme_dp 00:02:59.591 CC examples/accel/perf/accel_perf.o 00:02:59.591 LINK sgl 00:02:59.591 LINK aer 00:02:59.591 LINK reserve 00:02:59.850 CC examples/blob/cli/blobcli.o 00:02:59.850 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:59.850 CC examples/blob/hello_world/hello_blob.o 00:02:59.850 LINK memory_ut 00:02:59.850 LINK nvme_compliance 00:02:59.850 LINK overhead 00:02:59.850 LINK pmr_persistence 00:02:59.850 LINK cmb_copy 00:02:59.850 LINK fdp 00:03:00.108 LINK hotplug 00:03:00.108 LINK hello_world 00:03:00.108 LINK reconnect 00:03:00.108 LINK arbitration 00:03:00.108 LINK hello_blob 00:03:00.108 LINK hello_fsdev 00:03:00.108 LINK abort 00:03:00.108 LINK nvme_manage 00:03:00.108 LINK dif 00:03:00.108 LINK accel_perf 00:03:00.365 LINK blobcli 00:03:00.623 CC test/bdev/bdevio/bdevio.o 00:03:00.623 CC examples/bdev/hello_world/hello_bdev.o 00:03:00.623 CC examples/bdev/bdevperf/bdevperf.o 00:03:00.623 LINK iscsi_fuzz 00:03:00.888 LINK hello_bdev 00:03:00.888 LINK bdevio 00:03:00.888 LINK cuse 00:03:01.453 LINK bdevperf 00:03:01.711 CC examples/nvmf/nvmf/nvmf.o 00:03:02.276 LINK nvmf 00:03:04.806 LINK esnap 00:03:04.806 00:03:04.806 real 1m10.527s 00:03:04.806 user 11m50.276s 00:03:04.806 sys 2m39.067s 00:03:04.806 03:50:59 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:04.806 03:50:59 make -- common/autotest_common.sh@10 -- $ set +x 00:03:04.806 ************************************ 00:03:04.806 END TEST make 00:03:04.806 ************************************ 00:03:05.065 03:50:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:05.065 03:50:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:05.065 03:50:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:05.065 03:50:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.065 03:50:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:05.065 03:50:59 -- pm/common@44 -- $ pid=2192649 00:03:05.065 03:50:59 -- pm/common@50 -- $ kill -TERM 2192649 00:03:05.065 03:50:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.065 03:50:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:05.065 03:50:59 -- pm/common@44 -- $ pid=2192651 00:03:05.065 03:50:59 -- pm/common@50 -- $ kill 
-TERM 2192651 00:03:05.065 03:50:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.065 03:50:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:05.065 03:50:59 -- pm/common@44 -- $ pid=2192652 00:03:05.065 03:50:59 -- pm/common@50 -- $ kill -TERM 2192652 00:03:05.065 03:50:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.065 03:50:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:05.065 03:50:59 -- pm/common@44 -- $ pid=2192682 00:03:05.065 03:50:59 -- pm/common@50 -- $ sudo -E kill -TERM 2192682 00:03:05.065 03:50:59 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:05.065 03:50:59 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:05.065 03:50:59 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:05.065 03:50:59 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:05.065 03:50:59 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:05.065 03:50:59 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:05.065 03:50:59 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:05.065 03:50:59 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:05.065 03:50:59 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:05.065 03:50:59 -- scripts/common.sh@336 -- # IFS=.-: 00:03:05.065 03:50:59 -- scripts/common.sh@336 -- # read -ra ver1 00:03:05.065 03:50:59 -- scripts/common.sh@337 -- # IFS=.-: 00:03:05.065 03:50:59 -- scripts/common.sh@337 -- # read -ra ver2 00:03:05.065 03:50:59 -- scripts/common.sh@338 -- # local 'op=<' 00:03:05.065 03:50:59 -- scripts/common.sh@340 -- # ver1_l=2 00:03:05.065 03:50:59 -- scripts/common.sh@341 -- # ver2_l=1 00:03:05.065 03:50:59 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:05.065 03:50:59 -- scripts/common.sh@344 -- # case "$op" in 00:03:05.065 03:50:59 -- scripts/common.sh@345 -- # : 1 00:03:05.065 03:50:59 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:05.065 03:50:59 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:05.065 03:50:59 -- scripts/common.sh@365 -- # decimal 1 00:03:05.065 03:50:59 -- scripts/common.sh@353 -- # local d=1 00:03:05.065 03:50:59 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:05.065 03:50:59 -- scripts/common.sh@355 -- # echo 1 00:03:05.065 03:50:59 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:05.065 03:50:59 -- scripts/common.sh@366 -- # decimal 2 00:03:05.065 03:50:59 -- scripts/common.sh@353 -- # local d=2 00:03:05.065 03:50:59 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:05.065 03:50:59 -- scripts/common.sh@355 -- # echo 2 00:03:05.065 03:50:59 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:05.065 03:50:59 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:05.065 03:50:59 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:05.065 03:50:59 -- scripts/common.sh@368 -- # return 0 00:03:05.065 03:50:59 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:05.065 03:50:59 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:05.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.065 --rc genhtml_branch_coverage=1 00:03:05.065 --rc genhtml_function_coverage=1 00:03:05.065 --rc genhtml_legend=1 00:03:05.065 --rc geninfo_all_blocks=1 00:03:05.065 --rc geninfo_unexecuted_blocks=1 00:03:05.065 00:03:05.065 ' 00:03:05.065 03:50:59 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:05.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.065 --rc genhtml_branch_coverage=1 00:03:05.065 --rc genhtml_function_coverage=1 00:03:05.065 --rc genhtml_legend=1 00:03:05.065 --rc geninfo_all_blocks=1 00:03:05.065 --rc geninfo_unexecuted_blocks=1 00:03:05.065 00:03:05.065 ' 00:03:05.065 03:50:59 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:05.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.065 --rc genhtml_branch_coverage=1 00:03:05.065 --rc genhtml_function_coverage=1 00:03:05.065 --rc genhtml_legend=1 00:03:05.065 --rc geninfo_all_blocks=1 00:03:05.065 --rc geninfo_unexecuted_blocks=1 00:03:05.065 00:03:05.065 ' 00:03:05.065 03:50:59 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:05.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.065 --rc genhtml_branch_coverage=1 00:03:05.065 --rc genhtml_function_coverage=1 00:03:05.065 --rc genhtml_legend=1 00:03:05.065 --rc geninfo_all_blocks=1 00:03:05.065 --rc geninfo_unexecuted_blocks=1 00:03:05.065 00:03:05.065 ' 00:03:05.065 03:50:59 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:05.065 03:50:59 -- nvmf/common.sh@7 -- # uname -s 00:03:05.065 03:50:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:05.065 03:50:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:05.065 03:50:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:05.065 03:50:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:05.066 03:50:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:05.066 03:50:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:05.066 03:50:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:05.066 03:50:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:05.066 03:50:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:05.066 03:50:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:05.066 03:50:59 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:05.066 03:50:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:05.066 03:50:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:05.066 03:50:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:05.066 03:50:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:05.066 03:50:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:05.066 03:50:59 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:05.066 03:50:59 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:05.066 03:50:59 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:05.066 03:50:59 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:05.066 03:50:59 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:05.066 03:50:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.066 03:50:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.066 03:50:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.066 03:50:59 -- paths/export.sh@5 -- # export PATH 00:03:05.066 03:50:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.066 03:50:59 -- nvmf/common.sh@51 -- # : 0 00:03:05.066 03:50:59 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:05.066 03:50:59 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:05.066 03:50:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:05.066 03:50:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:05.066 03:50:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:05.066 03:50:59 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:05.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:05.066 03:50:59 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:05.066 03:50:59 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:05.066 03:50:59 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:05.066 03:50:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:05.066 03:50:59 -- spdk/autotest.sh@32 -- # uname -s 00:03:05.066 03:50:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:05.066 03:50:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:05.066 03:50:59 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
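Note on the "[: : integer expression expected" message from test/nvmf/common.sh above: line 33 runs a numeric test ('[' '' -eq 1 ']') against a flag that is unset in this configuration, so an empty string reaches the -eq comparison. The test simply evaluates false and the run continues, but the warning can be avoided by defaulting the flag before comparing. A minimal sketch (FLAG_VAR is a placeholder, not the real variable name used in common.sh):

  # default a possibly-unset flag so '[' never sees an empty numeric operand
  FLAG_VAR=""                            # unset/empty in this CI run
  if [ "${FLAG_VAR:-0}" -eq 1 ]; then    # ':-0' substitutes 0 when the flag is empty
      echo "flag enabled"
  fi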
00:03:05.066 03:50:59 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:05.066 03:50:59 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:05.066 03:50:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:05.066 03:50:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:05.066 03:50:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:05.066 03:50:59 -- spdk/autotest.sh@48 -- # udevadm_pid=2252118 00:03:05.066 03:50:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:05.066 03:50:59 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:05.066 03:50:59 -- pm/common@17 -- # local monitor 00:03:05.066 03:50:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.066 03:50:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.066 03:50:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.066 03:50:59 -- pm/common@21 -- # date +%s 00:03:05.066 03:50:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.066 03:50:59 -- pm/common@21 -- # date +%s 00:03:05.066 03:50:59 -- pm/common@25 -- # sleep 1 00:03:05.066 03:50:59 -- pm/common@21 -- # date +%s 00:03:05.066 03:50:59 -- pm/common@21 -- # date +%s 00:03:05.066 03:50:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733799059 00:03:05.066 03:50:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733799059 00:03:05.066 03:50:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733799059 00:03:05.066 03:50:59 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733799059 00:03:05.326 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733799059_collect-cpu-load.pm.log 00:03:05.326 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733799059_collect-vmstat.pm.log 00:03:05.326 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733799059_collect-cpu-temp.pm.log 00:03:05.326 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733799059_collect-bmc-pm.bmc.pm.log 00:03:06.265 03:51:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:06.265 03:51:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:06.265 03:51:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:06.265 03:51:00 -- common/autotest_common.sh@10 -- # set +x 00:03:06.265 03:51:00 -- spdk/autotest.sh@59 -- # create_test_list 00:03:06.265 03:51:00 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:06.265 03:51:00 -- common/autotest_common.sh@10 -- # set +x 00:03:06.265 03:51:00 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:06.265 03:51:00 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:06.265 03:51:00 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:06.265 03:51:00 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:06.265 03:51:00 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:06.265 03:51:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:06.265 03:51:00 -- common/autotest_common.sh@1457 -- # uname 00:03:06.265 03:51:00 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:06.265 03:51:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:06.265 03:51:00 -- common/autotest_common.sh@1477 -- # uname 00:03:06.265 03:51:00 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:06.265 03:51:00 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:06.265 03:51:00 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:06.265 lcov: LCOV version 1.15 00:03:06.265 03:51:00 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:24.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:24.441 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:46.388 03:51:37 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:46.388 03:51:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:46.388 03:51:37 -- common/autotest_common.sh@10 -- # set +x 00:03:46.388 03:51:37 -- spdk/autotest.sh@78 -- # rm -f 00:03:46.388 03:51:37 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.388 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:46.388 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:46.388 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:46.388 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:46.388 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:46.388 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:46.388 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:46.388 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:46.388 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:46.388 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:46.388 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:46.388 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:46.388 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:46.388 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:46.388 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:46.388 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:46.388 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:46.388 03:51:38 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:46.388 03:51:38 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:46.388 03:51:38 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:46.388 03:51:38 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:46.388 03:51:38 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:46.388 03:51:38 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:46.388 03:51:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:46.388 03:51:38 -- common/autotest_common.sh@1669 -- # bdf=0000:88:00.0 00:03:46.388 03:51:38 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:46.388 03:51:38 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:46.388 03:51:38 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:46.388 03:51:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:46.388 03:51:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:46.388 03:51:38 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:46.388 03:51:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:46.388 03:51:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:46.388 03:51:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:46.388 03:51:38 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:46.388 03:51:38 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:46.388 No valid GPT data, bailing 00:03:46.388 03:51:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:46.388 03:51:38 -- scripts/common.sh@394 -- # pt= 00:03:46.388 03:51:38 -- scripts/common.sh@395 -- # return 1 00:03:46.388 03:51:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:46.388 1+0 records in 00:03:46.388 1+0 records out 00:03:46.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00171729 s, 611 MB/s 00:03:46.388 03:51:38 -- spdk/autotest.sh@105 -- # sync 00:03:46.388 03:51:38 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:46.388 03:51:38 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:46.388 03:51:38 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:46.648 03:51:40 -- spdk/autotest.sh@111 -- # uname -s 00:03:46.648 03:51:40 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:46.648 03:51:40 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:46.648 03:51:40 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:48.030 Hugepages 00:03:48.030 node hugesize free / total 00:03:48.030 node0 1048576kB 0 / 0 00:03:48.030 node0 2048kB 0 / 0 00:03:48.030 node1 1048576kB 0 / 0 00:03:48.030 node1 2048kB 0 / 0 00:03:48.030 00:03:48.030 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:48.030 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:48.030 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:48.030 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:48.030 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:48.030 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:48.030 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:48.030 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:48.030 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:48.030 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:48.030 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:48.030 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma 
- - 00:03:48.030 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:48.030 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:48.030 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:48.030 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:48.030 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:48.030 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:48.030 03:51:42 -- spdk/autotest.sh@117 -- # uname -s 00:03:48.030 03:51:42 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:48.030 03:51:42 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:48.030 03:51:42 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:49.408 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:49.408 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:49.408 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:49.408 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:49.408 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:49.408 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:49.408 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:49.408 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:49.408 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:49.408 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:49.408 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:49.408 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:49.408 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:49.408 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:49.408 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:49.408 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:50.346 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:50.346 03:51:44 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:51.288 03:51:45 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:51.288 03:51:45 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:51.288 03:51:45 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:51.288 03:51:45 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:51.288 03:51:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:51.288 03:51:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:51.288 03:51:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:51.288 03:51:45 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:51.288 03:51:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:51.288 03:51:45 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:51.288 03:51:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:51.288 03:51:45 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.667 Waiting for block devices as requested 00:03:52.667 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:52.667 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:52.927 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:52.927 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:52.927 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:52.927 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:53.185 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:53.185 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:53.185 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:53.185 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:53.445 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 
00:03:53.445 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:53.445 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:53.704 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:53.704 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:53.704 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:53.704 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:53.964 03:51:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:53.964 03:51:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:53.965 03:51:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:53.965 03:51:48 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:03:53.965 03:51:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:53.965 03:51:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:53.965 03:51:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:53.965 03:51:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:53.965 03:51:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:53.965 03:51:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:53.965 03:51:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:53.965 03:51:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:53.965 03:51:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:53.965 03:51:48 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:53.965 03:51:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:53.965 03:51:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:53.965 03:51:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:53.965 03:51:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:53.965 03:51:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:53.965 03:51:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:53.965 03:51:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:53.965 03:51:48 -- common/autotest_common.sh@1543 -- # continue 00:03:53.965 03:51:48 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:53.965 03:51:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:53.965 03:51:48 -- common/autotest_common.sh@10 -- # set +x 00:03:53.965 03:51:48 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:53.965 03:51:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.965 03:51:48 -- common/autotest_common.sh@10 -- # set +x 00:03:53.965 03:51:48 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.347 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:55.347 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:55.347 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:55.347 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:55.347 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:55.347 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:55.347 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:55.347 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:55.347 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:55.347 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:55.347 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:55.347 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:55.347 0000:80:04.3 
(8086 0e23): ioatdma -> vfio-pci 00:03:55.347 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:55.347 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:55.347 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:56.285 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:56.285 03:51:50 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:56.285 03:51:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:56.285 03:51:50 -- common/autotest_common.sh@10 -- # set +x 00:03:56.543 03:51:50 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:56.543 03:51:50 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:56.543 03:51:50 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:56.543 03:51:50 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:56.543 03:51:50 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:56.543 03:51:50 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:56.543 03:51:50 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:56.543 03:51:50 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:56.543 03:51:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:56.543 03:51:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:56.543 03:51:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:56.543 03:51:50 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:56.543 03:51:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:56.543 03:51:50 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:56.543 03:51:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:56.543 03:51:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:56.543 03:51:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:56.543 03:51:50 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:56.543 03:51:50 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:56.543 03:51:50 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:56.543 03:51:50 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:56.543 03:51:50 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:03:56.543 03:51:50 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:03:56.543 03:51:50 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2263071 00:03:56.543 03:51:50 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.543 03:51:50 -- common/autotest_common.sh@1585 -- # waitforlisten 2263071 00:03:56.543 03:51:50 -- common/autotest_common.sh@835 -- # '[' -z 2263071 ']' 00:03:56.543 03:51:50 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.543 03:51:50 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:56.544 03:51:50 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.544 03:51:50 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:56.544 03:51:50 -- common/autotest_common.sh@10 -- # set +x 00:03:56.544 [2024-12-10 03:51:50.803113] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
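The opal_revert_cleanup step traced above finds the controller to revert in two moves: gen_nvme.sh lists the NVMe bus addresses SPDK can see, and each address is kept only if its PCI device ID read from sysfs equals 0x0a54 (the Intel datacenter NVMe part fitted in this node); spdk_tgt is then started and the script waits for its RPC socket. A rough sketch of the discovery pattern, assuming it is run from the SPDK repository root (the jq filter and sysfs path come from the trace above; the loop itself is illustrative):

  # list NVMe PCI addresses known to SPDK and keep those matching a device ID
  for bdf in $(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
      # sysfs exposes the 16-bit PCI device ID of each function
      if [ "$(cat /sys/bus/pci/devices/$bdf/device)" = "0x0a54" ]; then
          echo "$bdf"
      fi
  done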
00:03:56.544 [2024-12-10 03:51:50.803191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2263071 ] 00:03:56.544 [2024-12-10 03:51:50.867991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.801 [2024-12-10 03:51:50.929337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.058 03:51:51 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:57.058 03:51:51 -- common/autotest_common.sh@868 -- # return 0 00:03:57.058 03:51:51 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:57.058 03:51:51 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:57.058 03:51:51 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:00.343 nvme0n1 00:04:00.343 03:51:54 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:00.343 [2024-12-10 03:51:54.537008] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:00.343 [2024-12-10 03:51:54.537053] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:00.343 request: 00:04:00.343 { 00:04:00.343 "nvme_ctrlr_name": "nvme0", 00:04:00.343 "password": "test", 00:04:00.343 "method": "bdev_nvme_opal_revert", 00:04:00.343 "req_id": 1 00:04:00.343 } 00:04:00.343 Got JSON-RPC error response 00:04:00.343 response: 00:04:00.343 { 00:04:00.343 "code": -32603, 00:04:00.343 "message": "Internal error" 00:04:00.343 } 00:04:00.343 03:51:54 -- common/autotest_common.sh@1591 -- # true 00:04:00.343 03:51:54 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:00.343 03:51:54 -- common/autotest_common.sh@1595 -- # killprocess 2263071 00:04:00.343 03:51:54 -- common/autotest_common.sh@954 -- # '[' -z 2263071 ']' 00:04:00.343 03:51:54 -- common/autotest_common.sh@958 -- # kill -0 2263071 00:04:00.343 03:51:54 -- common/autotest_common.sh@959 -- # uname 00:04:00.343 03:51:54 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.343 03:51:54 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2263071 00:04:00.343 03:51:54 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:00.343 03:51:54 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:00.343 03:51:54 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2263071' 00:04:00.343 killing process with pid 2263071 00:04:00.343 03:51:54 -- common/autotest_common.sh@973 -- # kill 2263071 00:04:00.343 03:51:54 -- common/autotest_common.sh@978 -- # wait 2263071 00:04:02.239 03:51:56 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:02.239 03:51:56 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:02.239 03:51:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:02.239 03:51:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:02.239 03:51:56 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:02.239 03:51:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.239 03:51:56 -- common/autotest_common.sh@10 -- # set +x 00:04:02.239 03:51:56 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:02.239 03:51:56 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:02.239 03:51:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.239 03:51:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.239 03:51:56 -- common/autotest_common.sh@10 -- # set +x 00:04:02.239 ************************************ 00:04:02.239 START TEST env 00:04:02.239 ************************************ 00:04:02.240 03:51:56 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:02.240 * Looking for test storage... 00:04:02.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:02.240 03:51:56 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:02.240 03:51:56 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:02.240 03:51:56 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:02.240 03:51:56 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:02.240 03:51:56 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.240 03:51:56 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.240 03:51:56 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.240 03:51:56 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.240 03:51:56 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.240 03:51:56 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.240 03:51:56 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.240 03:51:56 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.240 03:51:56 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.240 03:51:56 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.240 03:51:56 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.240 03:51:56 env -- scripts/common.sh@344 -- # case "$op" in 00:04:02.240 03:51:56 env -- scripts/common.sh@345 -- # : 1 00:04:02.240 03:51:56 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.240 03:51:56 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.240 03:51:56 env -- scripts/common.sh@365 -- # decimal 1 00:04:02.240 03:51:56 env -- scripts/common.sh@353 -- # local d=1 00:04:02.240 03:51:56 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.240 03:51:56 env -- scripts/common.sh@355 -- # echo 1 00:04:02.240 03:51:56 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.240 03:51:56 env -- scripts/common.sh@366 -- # decimal 2 00:04:02.240 03:51:56 env -- scripts/common.sh@353 -- # local d=2 00:04:02.240 03:51:56 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.240 03:51:56 env -- scripts/common.sh@355 -- # echo 2 00:04:02.240 03:51:56 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.240 03:51:56 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.240 03:51:56 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.240 03:51:56 env -- scripts/common.sh@368 -- # return 0 00:04:02.240 03:51:56 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.240 03:51:56 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:02.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.240 --rc genhtml_branch_coverage=1 00:04:02.240 --rc genhtml_function_coverage=1 00:04:02.240 --rc genhtml_legend=1 00:04:02.240 --rc geninfo_all_blocks=1 00:04:02.240 --rc geninfo_unexecuted_blocks=1 00:04:02.240 00:04:02.240 ' 00:04:02.240 03:51:56 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:02.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.240 --rc genhtml_branch_coverage=1 00:04:02.240 --rc genhtml_function_coverage=1 00:04:02.240 --rc genhtml_legend=1 00:04:02.240 --rc geninfo_all_blocks=1 00:04:02.240 --rc geninfo_unexecuted_blocks=1 00:04:02.240 00:04:02.240 ' 00:04:02.240 03:51:56 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:02.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.240 --rc genhtml_branch_coverage=1 00:04:02.240 --rc genhtml_function_coverage=1 00:04:02.240 --rc genhtml_legend=1 00:04:02.240 --rc geninfo_all_blocks=1 00:04:02.240 --rc geninfo_unexecuted_blocks=1 00:04:02.240 00:04:02.240 ' 00:04:02.240 03:51:56 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:02.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.240 --rc genhtml_branch_coverage=1 00:04:02.240 --rc genhtml_function_coverage=1 00:04:02.240 --rc genhtml_legend=1 00:04:02.240 --rc geninfo_all_blocks=1 00:04:02.240 --rc geninfo_unexecuted_blocks=1 00:04:02.240 00:04:02.240 ' 00:04:02.240 03:51:56 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:02.240 03:51:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.240 03:51:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.240 03:51:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.240 ************************************ 00:04:02.240 START TEST env_memory 00:04:02.240 ************************************ 00:04:02.240 03:51:56 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:02.240 00:04:02.240 00:04:02.240 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.240 http://cunit.sourceforge.net/ 00:04:02.240 00:04:02.240 00:04:02.240 Suite: memory 00:04:02.240 Test: alloc and free memory map ...[2024-12-10 03:51:56.610688] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:02.240 passed 00:04:02.498 Test: mem map translation ...[2024-12-10 03:51:56.631318] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:02.498 [2024-12-10 03:51:56.631338] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:02.498 [2024-12-10 03:51:56.631392] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:02.499 [2024-12-10 03:51:56.631403] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:02.499 passed 00:04:02.499 Test: mem map registration ...[2024-12-10 03:51:56.674525] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:02.499 [2024-12-10 03:51:56.674568] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:02.499 passed 00:04:02.499 Test: mem map adjacent registrations ...passed 00:04:02.499 00:04:02.499 Run Summary: Type Total Ran Passed Failed Inactive 00:04:02.499 suites 1 1 n/a 0 0 00:04:02.499 tests 4 4 4 0 0 00:04:02.499 asserts 152 152 152 0 n/a 00:04:02.499 00:04:02.499 Elapsed time = 0.148 seconds 00:04:02.499 00:04:02.499 real 0m0.156s 00:04:02.499 user 0m0.147s 00:04:02.499 sys 0m0.009s 00:04:02.499 03:51:56 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.499 03:51:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:02.499 ************************************ 00:04:02.499 END TEST env_memory 00:04:02.499 ************************************ 00:04:02.499 03:51:56 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:02.499 03:51:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.499 03:51:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.499 03:51:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.499 ************************************ 00:04:02.499 START TEST env_vtophys 00:04:02.499 ************************************ 00:04:02.499 03:51:56 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:02.499 EAL: lib.eal log level changed from notice to debug 00:04:02.499 EAL: Detected lcore 0 as core 0 on socket 0 00:04:02.499 EAL: Detected lcore 1 as core 1 on socket 0 00:04:02.499 EAL: Detected lcore 2 as core 2 on socket 0 00:04:02.499 EAL: Detected lcore 3 as core 3 on socket 0 00:04:02.499 EAL: Detected lcore 4 as core 4 on socket 0 00:04:02.499 EAL: Detected lcore 5 as core 5 on socket 0 00:04:02.499 EAL: Detected lcore 6 as core 8 on socket 0 00:04:02.499 EAL: Detected lcore 7 as core 9 on socket 0 00:04:02.499 EAL: Detected lcore 8 as core 10 on socket 0 00:04:02.499 EAL: Detected lcore 9 as core 11 on socket 0 00:04:02.499 EAL: Detected lcore 10 
as core 12 on socket 0 00:04:02.499 EAL: Detected lcore 11 as core 13 on socket 0 00:04:02.499 EAL: Detected lcore 12 as core 0 on socket 1 00:04:02.499 EAL: Detected lcore 13 as core 1 on socket 1 00:04:02.499 EAL: Detected lcore 14 as core 2 on socket 1 00:04:02.499 EAL: Detected lcore 15 as core 3 on socket 1 00:04:02.499 EAL: Detected lcore 16 as core 4 on socket 1 00:04:02.499 EAL: Detected lcore 17 as core 5 on socket 1 00:04:02.499 EAL: Detected lcore 18 as core 8 on socket 1 00:04:02.499 EAL: Detected lcore 19 as core 9 on socket 1 00:04:02.499 EAL: Detected lcore 20 as core 10 on socket 1 00:04:02.499 EAL: Detected lcore 21 as core 11 on socket 1 00:04:02.499 EAL: Detected lcore 22 as core 12 on socket 1 00:04:02.499 EAL: Detected lcore 23 as core 13 on socket 1 00:04:02.499 EAL: Detected lcore 24 as core 0 on socket 0 00:04:02.499 EAL: Detected lcore 25 as core 1 on socket 0 00:04:02.499 EAL: Detected lcore 26 as core 2 on socket 0 00:04:02.499 EAL: Detected lcore 27 as core 3 on socket 0 00:04:02.499 EAL: Detected lcore 28 as core 4 on socket 0 00:04:02.499 EAL: Detected lcore 29 as core 5 on socket 0 00:04:02.499 EAL: Detected lcore 30 as core 8 on socket 0 00:04:02.499 EAL: Detected lcore 31 as core 9 on socket 0 00:04:02.499 EAL: Detected lcore 32 as core 10 on socket 0 00:04:02.499 EAL: Detected lcore 33 as core 11 on socket 0 00:04:02.499 EAL: Detected lcore 34 as core 12 on socket 0 00:04:02.499 EAL: Detected lcore 35 as core 13 on socket 0 00:04:02.499 EAL: Detected lcore 36 as core 0 on socket 1 00:04:02.499 EAL: Detected lcore 37 as core 1 on socket 1 00:04:02.499 EAL: Detected lcore 38 as core 2 on socket 1 00:04:02.499 EAL: Detected lcore 39 as core 3 on socket 1 00:04:02.499 EAL: Detected lcore 40 as core 4 on socket 1 00:04:02.499 EAL: Detected lcore 41 as core 5 on socket 1 00:04:02.499 EAL: Detected lcore 42 as core 8 on socket 1 00:04:02.499 EAL: Detected lcore 43 as core 9 on socket 1 00:04:02.499 EAL: Detected lcore 44 as core 10 on socket 1 00:04:02.499 EAL: Detected lcore 45 as core 11 on socket 1 00:04:02.499 EAL: Detected lcore 46 as core 12 on socket 1 00:04:02.499 EAL: Detected lcore 47 as core 13 on socket 1 00:04:02.499 EAL: Maximum logical cores by configuration: 128 00:04:02.499 EAL: Detected CPU lcores: 48 00:04:02.499 EAL: Detected NUMA nodes: 2 00:04:02.499 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:02.499 EAL: Detected shared linkage of DPDK 00:04:02.499 EAL: No shared files mode enabled, IPC will be disabled 00:04:02.499 EAL: Bus pci wants IOVA as 'DC' 00:04:02.499 EAL: Buses did not request a specific IOVA mode. 00:04:02.499 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:02.499 EAL: Selected IOVA mode 'VA' 00:04:02.499 EAL: Probing VFIO support... 00:04:02.499 EAL: IOMMU type 1 (Type 1) is supported 00:04:02.499 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:02.499 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:02.499 EAL: VFIO support initialized 00:04:02.499 EAL: Ask a virtual area of 0x2e000 bytes 00:04:02.499 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:02.499 EAL: Setting up physically contiguous memory... 
00:04:02.499 EAL: Setting maximum number of open files to 524288 00:04:02.499 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:02.499 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:02.499 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:02.499 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.499 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:02.499 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.499 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.499 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:02.499 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:02.499 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.499 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:02.499 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.499 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.499 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:02.499 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:02.499 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.499 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:02.499 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.499 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.499 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:02.499 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:02.499 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.499 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:02.499 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.499 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.499 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:02.499 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:02.499 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:02.499 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.499 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:02.499 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:02.499 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.499 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:02.499 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:02.499 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.499 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:02.499 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:02.499 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.499 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:02.499 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:02.499 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.499 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:02.499 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:02.499 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.499 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:02.499 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:02.499 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.499 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:02.499 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:02.499 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.499 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:02.499 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:02.499 EAL: Hugepages will be freed exactly as allocated. 00:04:02.499 EAL: No shared files mode enabled, IPC is disabled 00:04:02.499 EAL: No shared files mode enabled, IPC is disabled 00:04:02.499 EAL: TSC frequency is ~2700000 KHz 00:04:02.499 EAL: Main lcore 0 is ready (tid=7f5d86bb0a00;cpuset=[0]) 00:04:02.499 EAL: Trying to obtain current memory policy. 00:04:02.499 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.499 EAL: Restoring previous memory policy: 0 00:04:02.499 EAL: request: mp_malloc_sync 00:04:02.499 EAL: No shared files mode enabled, IPC is disabled 00:04:02.499 EAL: Heap on socket 0 was expanded by 2MB 00:04:02.499 EAL: No shared files mode enabled, IPC is disabled 00:04:02.499 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:02.499 EAL: Mem event callback 'spdk:(nil)' registered 00:04:02.499 00:04:02.499 00:04:02.499 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.499 http://cunit.sourceforge.net/ 00:04:02.499 00:04:02.499 00:04:02.499 Suite: components_suite 00:04:02.499 Test: vtophys_malloc_test ...passed 00:04:02.499 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:02.499 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.499 EAL: Restoring previous memory policy: 4 00:04:02.499 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.499 EAL: request: mp_malloc_sync 00:04:02.499 EAL: No shared files mode enabled, IPC is disabled 00:04:02.499 EAL: Heap on socket 0 was expanded by 4MB 00:04:02.499 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.499 EAL: request: mp_malloc_sync 00:04:02.499 EAL: No shared files mode enabled, IPC is disabled 00:04:02.499 EAL: Heap on socket 0 was shrunk by 4MB 00:04:02.499 EAL: Trying to obtain current memory policy. 00:04:02.499 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.499 EAL: Restoring previous memory policy: 4 00:04:02.499 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.499 EAL: request: mp_malloc_sync 00:04:02.499 EAL: No shared files mode enabled, IPC is disabled 00:04:02.499 EAL: Heap on socket 0 was expanded by 6MB 00:04:02.499 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.499 EAL: request: mp_malloc_sync 00:04:02.499 EAL: No shared files mode enabled, IPC is disabled 00:04:02.499 EAL: Heap on socket 0 was shrunk by 6MB 00:04:02.499 EAL: Trying to obtain current memory policy. 00:04:02.499 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.499 EAL: Restoring previous memory policy: 4 00:04:02.499 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.499 EAL: request: mp_malloc_sync 00:04:02.499 EAL: No shared files mode enabled, IPC is disabled 00:04:02.499 EAL: Heap on socket 0 was expanded by 10MB 00:04:02.499 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.499 EAL: request: mp_malloc_sync 00:04:02.499 EAL: No shared files mode enabled, IPC is disabled 00:04:02.499 EAL: Heap on socket 0 was shrunk by 10MB 00:04:02.499 EAL: Trying to obtain current memory policy. 
00:04:02.499 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.499 EAL: Restoring previous memory policy: 4 00:04:02.499 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.499 EAL: request: mp_malloc_sync 00:04:02.499 EAL: No shared files mode enabled, IPC is disabled 00:04:02.499 EAL: Heap on socket 0 was expanded by 18MB 00:04:02.499 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.499 EAL: request: mp_malloc_sync 00:04:02.499 EAL: No shared files mode enabled, IPC is disabled 00:04:02.499 EAL: Heap on socket 0 was shrunk by 18MB 00:04:02.499 EAL: Trying to obtain current memory policy. 00:04:02.499 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.758 EAL: Restoring previous memory policy: 4 00:04:02.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.758 EAL: request: mp_malloc_sync 00:04:02.758 EAL: No shared files mode enabled, IPC is disabled 00:04:02.758 EAL: Heap on socket 0 was expanded by 34MB 00:04:02.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.758 EAL: request: mp_malloc_sync 00:04:02.758 EAL: No shared files mode enabled, IPC is disabled 00:04:02.758 EAL: Heap on socket 0 was shrunk by 34MB 00:04:02.758 EAL: Trying to obtain current memory policy. 00:04:02.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.758 EAL: Restoring previous memory policy: 4 00:04:02.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.758 EAL: request: mp_malloc_sync 00:04:02.758 EAL: No shared files mode enabled, IPC is disabled 00:04:02.758 EAL: Heap on socket 0 was expanded by 66MB 00:04:02.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.758 EAL: request: mp_malloc_sync 00:04:02.758 EAL: No shared files mode enabled, IPC is disabled 00:04:02.758 EAL: Heap on socket 0 was shrunk by 66MB 00:04:02.758 EAL: Trying to obtain current memory policy. 00:04:02.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.758 EAL: Restoring previous memory policy: 4 00:04:02.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.758 EAL: request: mp_malloc_sync 00:04:02.758 EAL: No shared files mode enabled, IPC is disabled 00:04:02.758 EAL: Heap on socket 0 was expanded by 130MB 00:04:02.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.758 EAL: request: mp_malloc_sync 00:04:02.758 EAL: No shared files mode enabled, IPC is disabled 00:04:02.758 EAL: Heap on socket 0 was shrunk by 130MB 00:04:02.758 EAL: Trying to obtain current memory policy. 00:04:02.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.758 EAL: Restoring previous memory policy: 4 00:04:02.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.758 EAL: request: mp_malloc_sync 00:04:02.758 EAL: No shared files mode enabled, IPC is disabled 00:04:02.758 EAL: Heap on socket 0 was expanded by 258MB 00:04:02.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.016 EAL: request: mp_malloc_sync 00:04:03.016 EAL: No shared files mode enabled, IPC is disabled 00:04:03.016 EAL: Heap on socket 0 was shrunk by 258MB 00:04:03.016 EAL: Trying to obtain current memory policy. 
00:04:03.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.016 EAL: Restoring previous memory policy: 4 00:04:03.016 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.016 EAL: request: mp_malloc_sync 00:04:03.016 EAL: No shared files mode enabled, IPC is disabled 00:04:03.016 EAL: Heap on socket 0 was expanded by 514MB 00:04:03.274 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.274 EAL: request: mp_malloc_sync 00:04:03.274 EAL: No shared files mode enabled, IPC is disabled 00:04:03.274 EAL: Heap on socket 0 was shrunk by 514MB 00:04:03.274 EAL: Trying to obtain current memory policy. 00:04:03.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.533 EAL: Restoring previous memory policy: 4 00:04:03.533 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.533 EAL: request: mp_malloc_sync 00:04:03.533 EAL: No shared files mode enabled, IPC is disabled 00:04:03.533 EAL: Heap on socket 0 was expanded by 1026MB 00:04:03.791 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.048 EAL: request: mp_malloc_sync 00:04:04.048 EAL: No shared files mode enabled, IPC is disabled 00:04:04.048 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:04.048 passed 00:04:04.048 00:04:04.048 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.048 suites 1 1 n/a 0 0 00:04:04.048 tests 2 2 2 0 0 00:04:04.048 asserts 497 497 497 0 n/a 00:04:04.048 00:04:04.048 Elapsed time = 1.334 seconds 00:04:04.048 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.048 EAL: request: mp_malloc_sync 00:04:04.048 EAL: No shared files mode enabled, IPC is disabled 00:04:04.048 EAL: Heap on socket 0 was shrunk by 2MB 00:04:04.048 EAL: No shared files mode enabled, IPC is disabled 00:04:04.048 EAL: No shared files mode enabled, IPC is disabled 00:04:04.048 EAL: No shared files mode enabled, IPC is disabled 00:04:04.048 00:04:04.048 real 0m1.470s 00:04:04.048 user 0m0.853s 00:04:04.048 sys 0m0.569s 00:04:04.048 03:51:58 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.048 03:51:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:04.048 ************************************ 00:04:04.048 END TEST env_vtophys 00:04:04.048 ************************************ 00:04:04.048 03:51:58 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:04.048 03:51:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.048 03:51:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.048 03:51:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.048 ************************************ 00:04:04.048 START TEST env_pci 00:04:04.048 ************************************ 00:04:04.048 03:51:58 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:04.048 00:04:04.048 00:04:04.048 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.048 http://cunit.sourceforge.net/ 00:04:04.048 00:04:04.048 00:04:04.048 Suite: pci 00:04:04.048 Test: pci_hook ...[2024-12-10 03:51:58.313685] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2263972 has claimed it 00:04:04.048 EAL: Cannot find device (10000:00:01.0) 00:04:04.048 EAL: Failed to attach device on primary process 00:04:04.048 passed 00:04:04.048 00:04:04.048 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:04.048 suites 1 1 n/a 0 0 00:04:04.048 tests 1 1 1 0 0 00:04:04.048 asserts 25 25 25 0 n/a 00:04:04.048 00:04:04.048 Elapsed time = 0.022 seconds 00:04:04.048 00:04:04.048 real 0m0.036s 00:04:04.048 user 0m0.011s 00:04:04.048 sys 0m0.024s 00:04:04.048 03:51:58 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.048 03:51:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:04.048 ************************************ 00:04:04.048 END TEST env_pci 00:04:04.048 ************************************ 00:04:04.048 03:51:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:04.048 03:51:58 env -- env/env.sh@15 -- # uname 00:04:04.048 03:51:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:04.048 03:51:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:04.048 03:51:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:04.048 03:51:58 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:04.048 03:51:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.048 03:51:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.048 ************************************ 00:04:04.048 START TEST env_dpdk_post_init 00:04:04.048 ************************************ 00:04:04.049 03:51:58 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:04.049 EAL: Detected CPU lcores: 48 00:04:04.049 EAL: Detected NUMA nodes: 2 00:04:04.049 EAL: Detected shared linkage of DPDK 00:04:04.049 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:04.049 EAL: Selected IOVA mode 'VA' 00:04:04.308 EAL: VFIO support initialized 00:04:04.308 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:04.308 EAL: Using IOMMU type 1 (Type 1) 00:04:04.308 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:04.308 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:04.308 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:04.308 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:04.308 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:04.308 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:04.308 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:04.308 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:04.308 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:04.308 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:04.308 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:04.308 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:04.308 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:04.308 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:04.308 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:04.308 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:05.244 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 
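The env_dpdk_post_init output above ends with the EAL probing the ioat channels and the NVMe controller at 0000:88:00.0; the "Attaching to" / "Attached to" lines that follow come from SPDK's probe/attach callback flow. A minimal sketch of that flow against the public API is included here for reference — it is not the test's source, the app name is made up, and exact env-init signatures can vary slightly between SPDK releases.

```c
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Called once per controller found during enumeration. */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true;	/* true = attach to this controller */
}

/* Called after a controller has been attached. */
static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);	/* some releases also expect opts.opts_size to be set */
	opts.name = "nvme_probe_sketch";	/* illustrative name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* NULL trid: enumerate local PCIe NVMe controllers, as in the log above. */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		return 1;
	}
	return 0;
}
```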
00:04:08.519 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:08.519 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:08.519 Starting DPDK initialization... 00:04:08.519 Starting SPDK post initialization... 00:04:08.519 SPDK NVMe probe 00:04:08.519 Attaching to 0000:88:00.0 00:04:08.519 Attached to 0000:88:00.0 00:04:08.519 Cleaning up... 00:04:08.519 00:04:08.519 real 0m4.403s 00:04:08.519 user 0m3.026s 00:04:08.519 sys 0m0.435s 00:04:08.519 03:52:02 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.519 03:52:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:08.519 ************************************ 00:04:08.519 END TEST env_dpdk_post_init 00:04:08.519 ************************************ 00:04:08.519 03:52:02 env -- env/env.sh@26 -- # uname 00:04:08.519 03:52:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:08.519 03:52:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:08.519 03:52:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.519 03:52:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.519 03:52:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.519 ************************************ 00:04:08.519 START TEST env_mem_callbacks 00:04:08.519 ************************************ 00:04:08.519 03:52:02 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:08.519 EAL: Detected CPU lcores: 48 00:04:08.519 EAL: Detected NUMA nodes: 2 00:04:08.519 EAL: Detected shared linkage of DPDK 00:04:08.519 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:08.519 EAL: Selected IOVA mode 'VA' 00:04:08.519 EAL: VFIO support initialized 00:04:08.519 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:08.519 00:04:08.519 00:04:08.519 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.519 http://cunit.sourceforge.net/ 00:04:08.519 00:04:08.519 00:04:08.519 Suite: memory 00:04:08.519 Test: test ... 
00:04:08.519 register 0x200000200000 2097152 00:04:08.519 malloc 3145728 00:04:08.519 register 0x200000400000 4194304 00:04:08.519 buf 0x200000500000 len 3145728 PASSED 00:04:08.519 malloc 64 00:04:08.519 buf 0x2000004fff40 len 64 PASSED 00:04:08.519 malloc 4194304 00:04:08.519 register 0x200000800000 6291456 00:04:08.519 buf 0x200000a00000 len 4194304 PASSED 00:04:08.519 free 0x200000500000 3145728 00:04:08.519 free 0x2000004fff40 64 00:04:08.519 unregister 0x200000400000 4194304 PASSED 00:04:08.519 free 0x200000a00000 4194304 00:04:08.519 unregister 0x200000800000 6291456 PASSED 00:04:08.519 malloc 8388608 00:04:08.519 register 0x200000400000 10485760 00:04:08.520 buf 0x200000600000 len 8388608 PASSED 00:04:08.520 free 0x200000600000 8388608 00:04:08.520 unregister 0x200000400000 10485760 PASSED 00:04:08.520 passed 00:04:08.520 00:04:08.520 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.520 suites 1 1 n/a 0 0 00:04:08.520 tests 1 1 1 0 0 00:04:08.520 asserts 15 15 15 0 n/a 00:04:08.520 00:04:08.520 Elapsed time = 0.004 seconds 00:04:08.520 00:04:08.520 real 0m0.048s 00:04:08.520 user 0m0.018s 00:04:08.520 sys 0m0.030s 00:04:08.520 03:52:02 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.520 03:52:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:08.520 ************************************ 00:04:08.520 END TEST env_mem_callbacks 00:04:08.520 ************************************ 00:04:08.778 00:04:08.778 real 0m6.508s 00:04:08.778 user 0m4.257s 00:04:08.778 sys 0m1.281s 00:04:08.778 03:52:02 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.778 03:52:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.778 ************************************ 00:04:08.778 END TEST env 00:04:08.778 ************************************ 00:04:08.778 03:52:02 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:08.778 03:52:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.778 03:52:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.778 03:52:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.778 ************************************ 00:04:08.778 START TEST rpc 00:04:08.778 ************************************ 00:04:08.778 03:52:02 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:08.778 * Looking for test storage... 
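Before the rpc output continues below, a note on the env_mem_callbacks trace above: the register/unregister lines are printed as user memory is registered with and removed from the SPDK env layer, which in turn notifies any memory-map callbacks. A minimal sketch of that registration path follows for reference — it is not the test's source; the buffer size matches the 2 MiB region in the trace, and the app name is made up.

```c
#include <stdio.h>
#include <stdlib.h>
#include "spdk/env.h"

#define REGION_SIZE (2 * 1024 * 1024)	/* 2 MiB, matching the trace above */

int
main(void)
{
	struct spdk_env_opts opts;
	void *buf = NULL;

	spdk_env_opts_init(&opts);	/* some releases also expect opts.opts_size to be set */
	opts.name = "mem_cb_sketch";	/* illustrative name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* 2 MiB-aligned allocation; externally allocated memory handed to
	 * spdk_mem_register() is expected on 2 MiB granularity. */
	if (posix_memalign(&buf, REGION_SIZE, REGION_SIZE) != 0) {
		return 1;
	}

	/* Produces a "register <vaddr> <len>" notification to mem-map callbacks. */
	if (spdk_mem_register(buf, REGION_SIZE) != 0) {
		fprintf(stderr, "register failed\n");
		return 1;
	}

	/* ... buffer is now known to the env layer's address translation ... */

	/* Produces the matching "unregister <vaddr> <len>" notification. */
	spdk_mem_unregister(buf, REGION_SIZE);
	free(buf);
	return 0;
}
```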
00:04:08.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:08.778 03:52:03 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:08.778 03:52:03 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:08.778 03:52:03 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:08.778 03:52:03 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:08.778 03:52:03 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.778 03:52:03 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.778 03:52:03 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.778 03:52:03 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.778 03:52:03 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.778 03:52:03 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.778 03:52:03 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.778 03:52:03 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.778 03:52:03 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.778 03:52:03 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.778 03:52:03 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.778 03:52:03 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:08.778 03:52:03 rpc -- scripts/common.sh@345 -- # : 1 00:04:08.778 03:52:03 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.778 03:52:03 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.778 03:52:03 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:08.778 03:52:03 rpc -- scripts/common.sh@353 -- # local d=1 00:04:08.778 03:52:03 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.778 03:52:03 rpc -- scripts/common.sh@355 -- # echo 1 00:04:08.778 03:52:03 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.778 03:52:03 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:08.778 03:52:03 rpc -- scripts/common.sh@353 -- # local d=2 00:04:08.778 03:52:03 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.778 03:52:03 rpc -- scripts/common.sh@355 -- # echo 2 00:04:08.778 03:52:03 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.778 03:52:03 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.778 03:52:03 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.778 03:52:03 rpc -- scripts/common.sh@368 -- # return 0 00:04:08.778 03:52:03 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.778 03:52:03 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:08.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.778 --rc genhtml_branch_coverage=1 00:04:08.778 --rc genhtml_function_coverage=1 00:04:08.778 --rc genhtml_legend=1 00:04:08.778 --rc geninfo_all_blocks=1 00:04:08.778 --rc geninfo_unexecuted_blocks=1 00:04:08.778 00:04:08.778 ' 00:04:08.778 03:52:03 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:08.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.778 --rc genhtml_branch_coverage=1 00:04:08.778 --rc genhtml_function_coverage=1 00:04:08.778 --rc genhtml_legend=1 00:04:08.778 --rc geninfo_all_blocks=1 00:04:08.778 --rc geninfo_unexecuted_blocks=1 00:04:08.778 00:04:08.778 ' 00:04:08.778 03:52:03 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:08.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.778 --rc genhtml_branch_coverage=1 00:04:08.778 --rc genhtml_function_coverage=1 
00:04:08.778 --rc genhtml_legend=1 00:04:08.778 --rc geninfo_all_blocks=1 00:04:08.778 --rc geninfo_unexecuted_blocks=1 00:04:08.778 00:04:08.778 ' 00:04:08.778 03:52:03 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:08.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.778 --rc genhtml_branch_coverage=1 00:04:08.778 --rc genhtml_function_coverage=1 00:04:08.778 --rc genhtml_legend=1 00:04:08.778 --rc geninfo_all_blocks=1 00:04:08.778 --rc geninfo_unexecuted_blocks=1 00:04:08.778 00:04:08.778 ' 00:04:08.778 03:52:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2264750 00:04:08.778 03:52:03 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:08.778 03:52:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.778 03:52:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2264750 00:04:08.778 03:52:03 rpc -- common/autotest_common.sh@835 -- # '[' -z 2264750 ']' 00:04:08.778 03:52:03 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.778 03:52:03 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:08.778 03:52:03 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.778 03:52:03 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:08.778 03:52:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.778 [2024-12-10 03:52:03.154087] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:08.778 [2024-12-10 03:52:03.154167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2264750 ] 00:04:09.037 [2024-12-10 03:52:03.219630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.037 [2024-12-10 03:52:03.276557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:09.037 [2024-12-10 03:52:03.276621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2264750' to capture a snapshot of events at runtime. 00:04:09.037 [2024-12-10 03:52:03.276649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:09.037 [2024-12-10 03:52:03.276660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:09.037 [2024-12-10 03:52:03.276669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2264750 for offline analysis/debug. 
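At this point spdk_tgt (pid 2264750) has been launched and rpc.sh is waiting on its default UNIX socket, /var/tmp/spdk.sock; every rpc_cmd in the rpc_integrity / rpc_plugins / rpc_trace_cmd_test output that follows is a JSON-RPC 2.0 request over that socket. For reference, a minimal sketch of issuing one of those requests (bdev_get_bdevs, seen below) directly with plain POSIX sockets — buffer size and error handling are kept deliberately simple:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int
main(void)
{
	const char *sock_path = "/var/tmp/spdk.sock";	/* socket named in the log above */
	const char *req =
	    "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_get_bdevs\"}";
	char resp[65536];
	struct sockaddr_un addr;
	ssize_t n;
	int fd;

	fd = socket(AF_UNIX, SOCK_STREAM, 0);
	if (fd < 0) {
		return 1;
	}

	memset(&addr, 0, sizeof(addr));
	addr.sun_family = AF_UNIX;
	strncpy(addr.sun_path, sock_path, sizeof(addr.sun_path) - 1);
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("connect");
		close(fd);
		return 1;
	}

	if (write(fd, req, strlen(req)) < 0) {
		perror("write");
		close(fd);
		return 1;
	}

	/* One short read is enough for a small bdev list; a real client
	 * would loop until the JSON response is complete. */
	n = read(fd, resp, sizeof(resp) - 1);
	if (n > 0) {
		resp[n] = '\0';
		printf("%s\n", resp);
	}
	close(fd);
	return 0;
}
```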
00:04:09.037 [2024-12-10 03:52:03.277292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.295 03:52:03 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:09.295 03:52:03 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:09.295 03:52:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:09.295 03:52:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:09.295 03:52:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:09.295 03:52:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:09.295 03:52:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.295 03:52:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.295 03:52:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.295 ************************************ 00:04:09.295 START TEST rpc_integrity 00:04:09.295 ************************************ 00:04:09.295 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:09.295 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:09.295 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.295 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.295 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.295 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:09.295 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:09.295 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:09.295 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:09.295 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.295 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.295 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.295 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:09.295 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:09.295 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.295 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.295 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.295 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:09.295 { 00:04:09.295 "name": "Malloc0", 00:04:09.295 "aliases": [ 00:04:09.295 "94c2c3ff-f7c4-4118-a69a-4f7a73974b14" 00:04:09.295 ], 00:04:09.295 "product_name": "Malloc disk", 00:04:09.295 "block_size": 512, 00:04:09.295 "num_blocks": 16384, 00:04:09.295 "uuid": "94c2c3ff-f7c4-4118-a69a-4f7a73974b14", 00:04:09.295 "assigned_rate_limits": { 00:04:09.295 "rw_ios_per_sec": 0, 00:04:09.295 "rw_mbytes_per_sec": 0, 00:04:09.295 "r_mbytes_per_sec": 0, 00:04:09.295 "w_mbytes_per_sec": 0 00:04:09.295 }, 
00:04:09.295 "claimed": false, 00:04:09.295 "zoned": false, 00:04:09.295 "supported_io_types": { 00:04:09.295 "read": true, 00:04:09.295 "write": true, 00:04:09.295 "unmap": true, 00:04:09.295 "flush": true, 00:04:09.295 "reset": true, 00:04:09.295 "nvme_admin": false, 00:04:09.295 "nvme_io": false, 00:04:09.295 "nvme_io_md": false, 00:04:09.295 "write_zeroes": true, 00:04:09.295 "zcopy": true, 00:04:09.295 "get_zone_info": false, 00:04:09.295 "zone_management": false, 00:04:09.295 "zone_append": false, 00:04:09.295 "compare": false, 00:04:09.295 "compare_and_write": false, 00:04:09.295 "abort": true, 00:04:09.295 "seek_hole": false, 00:04:09.295 "seek_data": false, 00:04:09.295 "copy": true, 00:04:09.295 "nvme_iov_md": false 00:04:09.295 }, 00:04:09.295 "memory_domains": [ 00:04:09.295 { 00:04:09.295 "dma_device_id": "system", 00:04:09.295 "dma_device_type": 1 00:04:09.295 }, 00:04:09.295 { 00:04:09.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.295 "dma_device_type": 2 00:04:09.295 } 00:04:09.295 ], 00:04:09.295 "driver_specific": {} 00:04:09.295 } 00:04:09.295 ]' 00:04:09.295 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:09.295 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.295 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:09.296 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.296 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.554 [2024-12-10 03:52:03.682269] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:09.554 [2024-12-10 03:52:03.682307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.554 [2024-12-10 03:52:03.682342] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf01020 00:04:09.554 [2024-12-10 03:52:03.682355] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.554 [2024-12-10 03:52:03.683673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.554 [2024-12-10 03:52:03.683698] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.554 Passthru0 00:04:09.554 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.554 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.554 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.554 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.554 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.554 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.554 { 00:04:09.554 "name": "Malloc0", 00:04:09.554 "aliases": [ 00:04:09.554 "94c2c3ff-f7c4-4118-a69a-4f7a73974b14" 00:04:09.554 ], 00:04:09.554 "product_name": "Malloc disk", 00:04:09.554 "block_size": 512, 00:04:09.554 "num_blocks": 16384, 00:04:09.554 "uuid": "94c2c3ff-f7c4-4118-a69a-4f7a73974b14", 00:04:09.554 "assigned_rate_limits": { 00:04:09.554 "rw_ios_per_sec": 0, 00:04:09.554 "rw_mbytes_per_sec": 0, 00:04:09.554 "r_mbytes_per_sec": 0, 00:04:09.554 "w_mbytes_per_sec": 0 00:04:09.554 }, 00:04:09.554 "claimed": true, 00:04:09.554 "claim_type": "exclusive_write", 00:04:09.554 "zoned": false, 00:04:09.554 "supported_io_types": { 00:04:09.554 "read": true, 00:04:09.554 "write": true, 00:04:09.554 "unmap": true, 00:04:09.554 "flush": 
true, 00:04:09.554 "reset": true, 00:04:09.554 "nvme_admin": false, 00:04:09.554 "nvme_io": false, 00:04:09.554 "nvme_io_md": false, 00:04:09.554 "write_zeroes": true, 00:04:09.554 "zcopy": true, 00:04:09.554 "get_zone_info": false, 00:04:09.554 "zone_management": false, 00:04:09.554 "zone_append": false, 00:04:09.554 "compare": false, 00:04:09.554 "compare_and_write": false, 00:04:09.554 "abort": true, 00:04:09.554 "seek_hole": false, 00:04:09.554 "seek_data": false, 00:04:09.554 "copy": true, 00:04:09.554 "nvme_iov_md": false 00:04:09.554 }, 00:04:09.554 "memory_domains": [ 00:04:09.554 { 00:04:09.554 "dma_device_id": "system", 00:04:09.554 "dma_device_type": 1 00:04:09.554 }, 00:04:09.554 { 00:04:09.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.554 "dma_device_type": 2 00:04:09.554 } 00:04:09.554 ], 00:04:09.554 "driver_specific": {} 00:04:09.554 }, 00:04:09.554 { 00:04:09.554 "name": "Passthru0", 00:04:09.554 "aliases": [ 00:04:09.554 "c79e38e3-5dc9-58ef-90b5-631f453671b6" 00:04:09.554 ], 00:04:09.554 "product_name": "passthru", 00:04:09.554 "block_size": 512, 00:04:09.554 "num_blocks": 16384, 00:04:09.554 "uuid": "c79e38e3-5dc9-58ef-90b5-631f453671b6", 00:04:09.554 "assigned_rate_limits": { 00:04:09.554 "rw_ios_per_sec": 0, 00:04:09.554 "rw_mbytes_per_sec": 0, 00:04:09.554 "r_mbytes_per_sec": 0, 00:04:09.554 "w_mbytes_per_sec": 0 00:04:09.554 }, 00:04:09.554 "claimed": false, 00:04:09.554 "zoned": false, 00:04:09.554 "supported_io_types": { 00:04:09.554 "read": true, 00:04:09.554 "write": true, 00:04:09.554 "unmap": true, 00:04:09.554 "flush": true, 00:04:09.554 "reset": true, 00:04:09.554 "nvme_admin": false, 00:04:09.554 "nvme_io": false, 00:04:09.554 "nvme_io_md": false, 00:04:09.554 "write_zeroes": true, 00:04:09.554 "zcopy": true, 00:04:09.554 "get_zone_info": false, 00:04:09.554 "zone_management": false, 00:04:09.554 "zone_append": false, 00:04:09.554 "compare": false, 00:04:09.554 "compare_and_write": false, 00:04:09.554 "abort": true, 00:04:09.554 "seek_hole": false, 00:04:09.554 "seek_data": false, 00:04:09.554 "copy": true, 00:04:09.554 "nvme_iov_md": false 00:04:09.554 }, 00:04:09.554 "memory_domains": [ 00:04:09.554 { 00:04:09.554 "dma_device_id": "system", 00:04:09.554 "dma_device_type": 1 00:04:09.554 }, 00:04:09.554 { 00:04:09.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.554 "dma_device_type": 2 00:04:09.554 } 00:04:09.554 ], 00:04:09.554 "driver_specific": { 00:04:09.554 "passthru": { 00:04:09.554 "name": "Passthru0", 00:04:09.554 "base_bdev_name": "Malloc0" 00:04:09.554 } 00:04:09.554 } 00:04:09.554 } 00:04:09.554 ]' 00:04:09.554 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:09.554 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.554 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.554 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.554 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.554 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.554 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:09.554 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.554 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.554 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.554 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:09.554 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.554 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.554 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.554 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.554 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:09.554 03:52:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.554 00:04:09.554 real 0m0.220s 00:04:09.554 user 0m0.138s 00:04:09.554 sys 0m0.022s 00:04:09.554 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.554 03:52:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.554 ************************************ 00:04:09.554 END TEST rpc_integrity 00:04:09.554 ************************************ 00:04:09.554 03:52:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:09.554 03:52:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.554 03:52:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.554 03:52:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.554 ************************************ 00:04:09.554 START TEST rpc_plugins 00:04:09.554 ************************************ 00:04:09.554 03:52:03 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:09.554 03:52:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:09.554 03:52:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.554 03:52:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.554 03:52:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.554 03:52:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:09.554 03:52:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:09.554 03:52:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.554 03:52:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.554 03:52:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.554 03:52:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:09.554 { 00:04:09.554 "name": "Malloc1", 00:04:09.554 "aliases": [ 00:04:09.554 "e7bd7e0c-c9bf-4f06-942e-166b15b35276" 00:04:09.554 ], 00:04:09.554 "product_name": "Malloc disk", 00:04:09.554 "block_size": 4096, 00:04:09.554 "num_blocks": 256, 00:04:09.554 "uuid": "e7bd7e0c-c9bf-4f06-942e-166b15b35276", 00:04:09.554 "assigned_rate_limits": { 00:04:09.554 "rw_ios_per_sec": 0, 00:04:09.554 "rw_mbytes_per_sec": 0, 00:04:09.554 "r_mbytes_per_sec": 0, 00:04:09.554 "w_mbytes_per_sec": 0 00:04:09.554 }, 00:04:09.554 "claimed": false, 00:04:09.554 "zoned": false, 00:04:09.554 "supported_io_types": { 00:04:09.554 "read": true, 00:04:09.554 "write": true, 00:04:09.554 "unmap": true, 00:04:09.554 "flush": true, 00:04:09.554 "reset": true, 00:04:09.554 "nvme_admin": false, 00:04:09.554 "nvme_io": false, 00:04:09.554 "nvme_io_md": false, 00:04:09.554 "write_zeroes": true, 00:04:09.554 "zcopy": true, 00:04:09.554 "get_zone_info": false, 00:04:09.554 "zone_management": false, 00:04:09.554 "zone_append": false, 00:04:09.554 "compare": false, 00:04:09.554 "compare_and_write": false, 00:04:09.554 "abort": true, 00:04:09.554 "seek_hole": false, 00:04:09.554 "seek_data": false, 00:04:09.554 "copy": true, 00:04:09.554 "nvme_iov_md": false 
00:04:09.554 }, 00:04:09.554 "memory_domains": [ 00:04:09.554 { 00:04:09.554 "dma_device_id": "system", 00:04:09.554 "dma_device_type": 1 00:04:09.555 }, 00:04:09.555 { 00:04:09.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.555 "dma_device_type": 2 00:04:09.555 } 00:04:09.555 ], 00:04:09.555 "driver_specific": {} 00:04:09.555 } 00:04:09.555 ]' 00:04:09.555 03:52:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:09.555 03:52:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:09.555 03:52:03 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:09.555 03:52:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.555 03:52:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.555 03:52:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.555 03:52:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:09.555 03:52:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.555 03:52:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.555 03:52:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.555 03:52:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:09.555 03:52:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:09.813 03:52:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:09.813 00:04:09.813 real 0m0.106s 00:04:09.813 user 0m0.068s 00:04:09.813 sys 0m0.008s 00:04:09.813 03:52:03 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.813 03:52:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.813 ************************************ 00:04:09.813 END TEST rpc_plugins 00:04:09.813 ************************************ 00:04:09.813 03:52:03 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:09.813 03:52:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.813 03:52:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.813 03:52:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.813 ************************************ 00:04:09.813 START TEST rpc_trace_cmd_test 00:04:09.813 ************************************ 00:04:09.813 03:52:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:09.813 03:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:09.813 03:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:09.813 03:52:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.813 03:52:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:09.813 03:52:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.813 03:52:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:09.813 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2264750", 00:04:09.813 "tpoint_group_mask": "0x8", 00:04:09.813 "iscsi_conn": { 00:04:09.813 "mask": "0x2", 00:04:09.813 "tpoint_mask": "0x0" 00:04:09.813 }, 00:04:09.813 "scsi": { 00:04:09.813 "mask": "0x4", 00:04:09.813 "tpoint_mask": "0x0" 00:04:09.813 }, 00:04:09.813 "bdev": { 00:04:09.813 "mask": "0x8", 00:04:09.813 "tpoint_mask": "0xffffffffffffffff" 00:04:09.813 }, 00:04:09.813 "nvmf_rdma": { 00:04:09.813 "mask": "0x10", 00:04:09.813 "tpoint_mask": "0x0" 00:04:09.813 }, 00:04:09.813 "nvmf_tcp": { 00:04:09.813 "mask": "0x20", 00:04:09.813 
"tpoint_mask": "0x0" 00:04:09.813 }, 00:04:09.813 "ftl": { 00:04:09.813 "mask": "0x40", 00:04:09.813 "tpoint_mask": "0x0" 00:04:09.813 }, 00:04:09.813 "blobfs": { 00:04:09.813 "mask": "0x80", 00:04:09.813 "tpoint_mask": "0x0" 00:04:09.813 }, 00:04:09.813 "dsa": { 00:04:09.813 "mask": "0x200", 00:04:09.813 "tpoint_mask": "0x0" 00:04:09.813 }, 00:04:09.813 "thread": { 00:04:09.813 "mask": "0x400", 00:04:09.813 "tpoint_mask": "0x0" 00:04:09.813 }, 00:04:09.813 "nvme_pcie": { 00:04:09.813 "mask": "0x800", 00:04:09.813 "tpoint_mask": "0x0" 00:04:09.813 }, 00:04:09.813 "iaa": { 00:04:09.813 "mask": "0x1000", 00:04:09.813 "tpoint_mask": "0x0" 00:04:09.813 }, 00:04:09.813 "nvme_tcp": { 00:04:09.813 "mask": "0x2000", 00:04:09.813 "tpoint_mask": "0x0" 00:04:09.813 }, 00:04:09.813 "bdev_nvme": { 00:04:09.813 "mask": "0x4000", 00:04:09.813 "tpoint_mask": "0x0" 00:04:09.813 }, 00:04:09.813 "sock": { 00:04:09.813 "mask": "0x8000", 00:04:09.813 "tpoint_mask": "0x0" 00:04:09.813 }, 00:04:09.813 "blob": { 00:04:09.813 "mask": "0x10000", 00:04:09.813 "tpoint_mask": "0x0" 00:04:09.813 }, 00:04:09.813 "bdev_raid": { 00:04:09.813 "mask": "0x20000", 00:04:09.813 "tpoint_mask": "0x0" 00:04:09.813 }, 00:04:09.813 "scheduler": { 00:04:09.813 "mask": "0x40000", 00:04:09.813 "tpoint_mask": "0x0" 00:04:09.813 } 00:04:09.813 }' 00:04:09.813 03:52:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:09.813 03:52:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:09.813 03:52:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:09.813 03:52:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:09.813 03:52:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:09.813 03:52:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:09.813 03:52:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:09.813 03:52:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:09.813 03:52:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:09.813 03:52:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:09.813 00:04:09.813 real 0m0.181s 00:04:09.813 user 0m0.155s 00:04:09.813 sys 0m0.018s 00:04:09.813 03:52:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.813 03:52:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:09.813 ************************************ 00:04:09.813 END TEST rpc_trace_cmd_test 00:04:09.813 ************************************ 00:04:09.813 03:52:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:09.813 03:52:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:09.813 03:52:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:09.813 03:52:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.813 03:52:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.813 03:52:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.072 ************************************ 00:04:10.072 START TEST rpc_daemon_integrity 00:04:10.072 ************************************ 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.072 03:52:04 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:10.072 { 00:04:10.072 "name": "Malloc2", 00:04:10.072 "aliases": [ 00:04:10.072 "8fa25601-5b85-4a39-a7ad-b836d1ca265a" 00:04:10.072 ], 00:04:10.072 "product_name": "Malloc disk", 00:04:10.072 "block_size": 512, 00:04:10.072 "num_blocks": 16384, 00:04:10.072 "uuid": "8fa25601-5b85-4a39-a7ad-b836d1ca265a", 00:04:10.072 "assigned_rate_limits": { 00:04:10.072 "rw_ios_per_sec": 0, 00:04:10.072 "rw_mbytes_per_sec": 0, 00:04:10.072 "r_mbytes_per_sec": 0, 00:04:10.072 "w_mbytes_per_sec": 0 00:04:10.072 }, 00:04:10.072 "claimed": false, 00:04:10.072 "zoned": false, 00:04:10.072 "supported_io_types": { 00:04:10.072 "read": true, 00:04:10.072 "write": true, 00:04:10.072 "unmap": true, 00:04:10.072 "flush": true, 00:04:10.072 "reset": true, 00:04:10.072 "nvme_admin": false, 00:04:10.072 "nvme_io": false, 00:04:10.072 "nvme_io_md": false, 00:04:10.072 "write_zeroes": true, 00:04:10.072 "zcopy": true, 00:04:10.072 "get_zone_info": false, 00:04:10.072 "zone_management": false, 00:04:10.072 "zone_append": false, 00:04:10.072 "compare": false, 00:04:10.072 "compare_and_write": false, 00:04:10.072 "abort": true, 00:04:10.072 "seek_hole": false, 00:04:10.072 "seek_data": false, 00:04:10.072 "copy": true, 00:04:10.072 "nvme_iov_md": false 00:04:10.072 }, 00:04:10.072 "memory_domains": [ 00:04:10.072 { 00:04:10.072 "dma_device_id": "system", 00:04:10.072 "dma_device_type": 1 00:04:10.072 }, 00:04:10.072 { 00:04:10.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.072 "dma_device_type": 2 00:04:10.072 } 00:04:10.072 ], 00:04:10.072 "driver_specific": {} 00:04:10.072 } 00:04:10.072 ]' 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.072 [2024-12-10 03:52:04.316576] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:10.072 
[2024-12-10 03:52:04.316621] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:10.072 [2024-12-10 03:52:04.316642] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe50320 00:04:10.072 [2024-12-10 03:52:04.316663] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:10.072 [2024-12-10 03:52:04.317913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:10.072 [2024-12-10 03:52:04.317935] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:10.072 Passthru0 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:10.072 { 00:04:10.072 "name": "Malloc2", 00:04:10.072 "aliases": [ 00:04:10.072 "8fa25601-5b85-4a39-a7ad-b836d1ca265a" 00:04:10.072 ], 00:04:10.072 "product_name": "Malloc disk", 00:04:10.072 "block_size": 512, 00:04:10.072 "num_blocks": 16384, 00:04:10.072 "uuid": "8fa25601-5b85-4a39-a7ad-b836d1ca265a", 00:04:10.072 "assigned_rate_limits": { 00:04:10.072 "rw_ios_per_sec": 0, 00:04:10.072 "rw_mbytes_per_sec": 0, 00:04:10.072 "r_mbytes_per_sec": 0, 00:04:10.072 "w_mbytes_per_sec": 0 00:04:10.072 }, 00:04:10.072 "claimed": true, 00:04:10.072 "claim_type": "exclusive_write", 00:04:10.072 "zoned": false, 00:04:10.072 "supported_io_types": { 00:04:10.072 "read": true, 00:04:10.072 "write": true, 00:04:10.072 "unmap": true, 00:04:10.072 "flush": true, 00:04:10.072 "reset": true, 00:04:10.072 "nvme_admin": false, 00:04:10.072 "nvme_io": false, 00:04:10.072 "nvme_io_md": false, 00:04:10.072 "write_zeroes": true, 00:04:10.072 "zcopy": true, 00:04:10.072 "get_zone_info": false, 00:04:10.072 "zone_management": false, 00:04:10.072 "zone_append": false, 00:04:10.072 "compare": false, 00:04:10.072 "compare_and_write": false, 00:04:10.072 "abort": true, 00:04:10.072 "seek_hole": false, 00:04:10.072 "seek_data": false, 00:04:10.072 "copy": true, 00:04:10.072 "nvme_iov_md": false 00:04:10.072 }, 00:04:10.072 "memory_domains": [ 00:04:10.072 { 00:04:10.072 "dma_device_id": "system", 00:04:10.072 "dma_device_type": 1 00:04:10.072 }, 00:04:10.072 { 00:04:10.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.072 "dma_device_type": 2 00:04:10.072 } 00:04:10.072 ], 00:04:10.072 "driver_specific": {} 00:04:10.072 }, 00:04:10.072 { 00:04:10.072 "name": "Passthru0", 00:04:10.072 "aliases": [ 00:04:10.072 "9b136bce-4ebb-5d3a-ab22-b0ed29f304f6" 00:04:10.072 ], 00:04:10.072 "product_name": "passthru", 00:04:10.072 "block_size": 512, 00:04:10.072 "num_blocks": 16384, 00:04:10.072 "uuid": "9b136bce-4ebb-5d3a-ab22-b0ed29f304f6", 00:04:10.072 "assigned_rate_limits": { 00:04:10.072 "rw_ios_per_sec": 0, 00:04:10.072 "rw_mbytes_per_sec": 0, 00:04:10.072 "r_mbytes_per_sec": 0, 00:04:10.072 "w_mbytes_per_sec": 0 00:04:10.072 }, 00:04:10.072 "claimed": false, 00:04:10.072 "zoned": false, 00:04:10.072 "supported_io_types": { 00:04:10.072 "read": true, 00:04:10.072 "write": true, 00:04:10.072 "unmap": true, 00:04:10.072 "flush": true, 00:04:10.072 "reset": true, 
00:04:10.072 "nvme_admin": false, 00:04:10.072 "nvme_io": false, 00:04:10.072 "nvme_io_md": false, 00:04:10.072 "write_zeroes": true, 00:04:10.072 "zcopy": true, 00:04:10.072 "get_zone_info": false, 00:04:10.072 "zone_management": false, 00:04:10.072 "zone_append": false, 00:04:10.072 "compare": false, 00:04:10.072 "compare_and_write": false, 00:04:10.072 "abort": true, 00:04:10.072 "seek_hole": false, 00:04:10.072 "seek_data": false, 00:04:10.072 "copy": true, 00:04:10.072 "nvme_iov_md": false 00:04:10.072 }, 00:04:10.072 "memory_domains": [ 00:04:10.072 { 00:04:10.072 "dma_device_id": "system", 00:04:10.072 "dma_device_type": 1 00:04:10.072 }, 00:04:10.072 { 00:04:10.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.072 "dma_device_type": 2 00:04:10.072 } 00:04:10.072 ], 00:04:10.072 "driver_specific": { 00:04:10.072 "passthru": { 00:04:10.072 "name": "Passthru0", 00:04:10.072 "base_bdev_name": "Malloc2" 00:04:10.072 } 00:04:10.072 } 00:04:10.072 } 00:04:10.072 ]' 00:04:10.072 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:10.073 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:10.073 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:10.073 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.073 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.073 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.073 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:10.073 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.073 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.073 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.073 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:10.073 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.073 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.073 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.073 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:10.073 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:10.073 03:52:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:10.073 00:04:10.073 real 0m0.212s 00:04:10.073 user 0m0.133s 00:04:10.073 sys 0m0.022s 00:04:10.073 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.073 03:52:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.073 ************************************ 00:04:10.073 END TEST rpc_daemon_integrity 00:04:10.073 ************************************ 00:04:10.073 03:52:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:10.073 03:52:04 rpc -- rpc/rpc.sh@84 -- # killprocess 2264750 00:04:10.073 03:52:04 rpc -- common/autotest_common.sh@954 -- # '[' -z 2264750 ']' 00:04:10.073 03:52:04 rpc -- common/autotest_common.sh@958 -- # kill -0 2264750 00:04:10.073 03:52:04 rpc -- common/autotest_common.sh@959 -- # uname 00:04:10.073 03:52:04 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.073 03:52:04 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2264750 
00:04:10.330 03:52:04 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:10.330 03:52:04 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:10.330 03:52:04 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2264750' 00:04:10.330 killing process with pid 2264750 00:04:10.330 03:52:04 rpc -- common/autotest_common.sh@973 -- # kill 2264750 00:04:10.330 03:52:04 rpc -- common/autotest_common.sh@978 -- # wait 2264750 00:04:10.589 00:04:10.589 real 0m1.933s 00:04:10.589 user 0m2.367s 00:04:10.589 sys 0m0.629s 00:04:10.589 03:52:04 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.589 03:52:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.589 ************************************ 00:04:10.589 END TEST rpc 00:04:10.589 ************************************ 00:04:10.589 03:52:04 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:10.589 03:52:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.589 03:52:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.589 03:52:04 -- common/autotest_common.sh@10 -- # set +x 00:04:10.589 ************************************ 00:04:10.589 START TEST skip_rpc 00:04:10.589 ************************************ 00:04:10.589 03:52:04 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:10.848 * Looking for test storage... 00:04:10.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:10.848 03:52:05 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:10.848 03:52:05 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:10.848 03:52:05 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:10.848 03:52:05 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.848 03:52:05 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:10.848 03:52:05 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.848 03:52:05 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:10.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.848 --rc genhtml_branch_coverage=1 00:04:10.848 --rc genhtml_function_coverage=1 00:04:10.848 --rc genhtml_legend=1 00:04:10.848 --rc geninfo_all_blocks=1 00:04:10.848 --rc geninfo_unexecuted_blocks=1 00:04:10.848 00:04:10.848 ' 00:04:10.848 03:52:05 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:10.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.848 --rc genhtml_branch_coverage=1 00:04:10.848 --rc genhtml_function_coverage=1 00:04:10.848 --rc genhtml_legend=1 00:04:10.848 --rc geninfo_all_blocks=1 00:04:10.848 --rc geninfo_unexecuted_blocks=1 00:04:10.848 00:04:10.848 ' 00:04:10.848 03:52:05 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:10.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.848 --rc genhtml_branch_coverage=1 00:04:10.848 --rc genhtml_function_coverage=1 00:04:10.848 --rc genhtml_legend=1 00:04:10.848 --rc geninfo_all_blocks=1 00:04:10.848 --rc geninfo_unexecuted_blocks=1 00:04:10.848 00:04:10.848 ' 00:04:10.848 03:52:05 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:10.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.848 --rc genhtml_branch_coverage=1 00:04:10.848 --rc genhtml_function_coverage=1 00:04:10.848 --rc genhtml_legend=1 00:04:10.848 --rc geninfo_all_blocks=1 00:04:10.848 --rc geninfo_unexecuted_blocks=1 00:04:10.848 00:04:10.848 ' 00:04:10.848 03:52:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:10.848 03:52:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:10.848 03:52:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:10.848 03:52:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.848 03:52:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.848 03:52:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.848 ************************************ 00:04:10.848 START TEST skip_rpc 00:04:10.848 ************************************ 00:04:10.848 03:52:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:10.848 
03:52:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2265082 00:04:10.848 03:52:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:10.848 03:52:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.848 03:52:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:10.848 [2024-12-10 03:52:05.180401] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:10.848 [2024-12-10 03:52:05.180480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265082 ] 00:04:11.106 [2024-12-10 03:52:05.246608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.106 [2024-12-10 03:52:05.303937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2265082 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2265082 ']' 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2265082 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2265082 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2265082' 00:04:16.402 killing process with pid 2265082 00:04:16.402 03:52:10 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2265082 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2265082 00:04:16.402 00:04:16.402 real 0m5.451s 00:04:16.402 user 0m5.141s 00:04:16.402 sys 0m0.318s 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.402 03:52:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.402 ************************************ 00:04:16.402 END TEST skip_rpc 00:04:16.402 ************************************ 00:04:16.403 03:52:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:16.403 03:52:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.403 03:52:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.403 03:52:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.403 ************************************ 00:04:16.403 START TEST skip_rpc_with_json 00:04:16.403 ************************************ 00:04:16.403 03:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:16.403 03:52:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:16.403 03:52:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2265775 00:04:16.403 03:52:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:16.403 03:52:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.403 03:52:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2265775 00:04:16.403 03:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2265775 ']' 00:04:16.403 03:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.403 03:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:16.403 03:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.403 03:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:16.403 03:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.403 [2024-12-10 03:52:10.677101] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:16.403 [2024-12-10 03:52:10.677206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265775 ] 00:04:16.403 [2024-12-10 03:52:10.744153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.686 [2024-12-10 03:52:10.807327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.686 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.686 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:16.686 03:52:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:16.944 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.944 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.944 [2024-12-10 03:52:11.074398] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:16.944 request: 00:04:16.944 { 00:04:16.944 "trtype": "tcp", 00:04:16.944 "method": "nvmf_get_transports", 00:04:16.944 "req_id": 1 00:04:16.944 } 00:04:16.944 Got JSON-RPC error response 00:04:16.944 response: 00:04:16.944 { 00:04:16.944 "code": -19, 00:04:16.944 "message": "No such device" 00:04:16.944 } 00:04:16.944 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:16.944 03:52:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:16.944 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.944 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.944 [2024-12-10 03:52:11.082516] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:16.944 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.944 03:52:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:16.944 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.944 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.944 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.944 03:52:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:16.944 { 00:04:16.944 "subsystems": [ 00:04:16.944 { 00:04:16.944 "subsystem": "fsdev", 00:04:16.944 "config": [ 00:04:16.944 { 00:04:16.944 "method": "fsdev_set_opts", 00:04:16.944 "params": { 00:04:16.944 "fsdev_io_pool_size": 65535, 00:04:16.944 "fsdev_io_cache_size": 256 00:04:16.944 } 00:04:16.944 } 00:04:16.944 ] 00:04:16.944 }, 00:04:16.944 { 00:04:16.944 "subsystem": "vfio_user_target", 00:04:16.944 "config": null 00:04:16.944 }, 00:04:16.944 { 00:04:16.944 "subsystem": "keyring", 00:04:16.944 "config": [] 00:04:16.944 }, 00:04:16.944 { 00:04:16.944 "subsystem": "iobuf", 00:04:16.944 "config": [ 00:04:16.944 { 00:04:16.944 "method": "iobuf_set_options", 00:04:16.944 "params": { 00:04:16.944 "small_pool_count": 8192, 00:04:16.944 "large_pool_count": 1024, 00:04:16.944 "small_bufsize": 8192, 00:04:16.944 "large_bufsize": 135168, 00:04:16.944 "enable_numa": false 00:04:16.944 } 00:04:16.944 } 
00:04:16.944 ] 00:04:16.944 }, 00:04:16.944 { 00:04:16.944 "subsystem": "sock", 00:04:16.944 "config": [ 00:04:16.944 { 00:04:16.944 "method": "sock_set_default_impl", 00:04:16.944 "params": { 00:04:16.944 "impl_name": "posix" 00:04:16.944 } 00:04:16.944 }, 00:04:16.944 { 00:04:16.944 "method": "sock_impl_set_options", 00:04:16.944 "params": { 00:04:16.944 "impl_name": "ssl", 00:04:16.944 "recv_buf_size": 4096, 00:04:16.944 "send_buf_size": 4096, 00:04:16.944 "enable_recv_pipe": true, 00:04:16.944 "enable_quickack": false, 00:04:16.944 "enable_placement_id": 0, 00:04:16.944 "enable_zerocopy_send_server": true, 00:04:16.944 "enable_zerocopy_send_client": false, 00:04:16.944 "zerocopy_threshold": 0, 00:04:16.944 "tls_version": 0, 00:04:16.944 "enable_ktls": false 00:04:16.944 } 00:04:16.944 }, 00:04:16.944 { 00:04:16.944 "method": "sock_impl_set_options", 00:04:16.944 "params": { 00:04:16.944 "impl_name": "posix", 00:04:16.944 "recv_buf_size": 2097152, 00:04:16.944 "send_buf_size": 2097152, 00:04:16.944 "enable_recv_pipe": true, 00:04:16.944 "enable_quickack": false, 00:04:16.944 "enable_placement_id": 0, 00:04:16.944 "enable_zerocopy_send_server": true, 00:04:16.944 "enable_zerocopy_send_client": false, 00:04:16.944 "zerocopy_threshold": 0, 00:04:16.944 "tls_version": 0, 00:04:16.944 "enable_ktls": false 00:04:16.944 } 00:04:16.944 } 00:04:16.944 ] 00:04:16.944 }, 00:04:16.944 { 00:04:16.944 "subsystem": "vmd", 00:04:16.944 "config": [] 00:04:16.944 }, 00:04:16.944 { 00:04:16.944 "subsystem": "accel", 00:04:16.944 "config": [ 00:04:16.944 { 00:04:16.944 "method": "accel_set_options", 00:04:16.944 "params": { 00:04:16.944 "small_cache_size": 128, 00:04:16.944 "large_cache_size": 16, 00:04:16.944 "task_count": 2048, 00:04:16.944 "sequence_count": 2048, 00:04:16.944 "buf_count": 2048 00:04:16.944 } 00:04:16.944 } 00:04:16.944 ] 00:04:16.944 }, 00:04:16.944 { 00:04:16.944 "subsystem": "bdev", 00:04:16.944 "config": [ 00:04:16.944 { 00:04:16.944 "method": "bdev_set_options", 00:04:16.944 "params": { 00:04:16.944 "bdev_io_pool_size": 65535, 00:04:16.944 "bdev_io_cache_size": 256, 00:04:16.944 "bdev_auto_examine": true, 00:04:16.944 "iobuf_small_cache_size": 128, 00:04:16.944 "iobuf_large_cache_size": 16 00:04:16.944 } 00:04:16.944 }, 00:04:16.944 { 00:04:16.944 "method": "bdev_raid_set_options", 00:04:16.944 "params": { 00:04:16.944 "process_window_size_kb": 1024, 00:04:16.944 "process_max_bandwidth_mb_sec": 0 00:04:16.944 } 00:04:16.944 }, 00:04:16.944 { 00:04:16.945 "method": "bdev_iscsi_set_options", 00:04:16.945 "params": { 00:04:16.945 "timeout_sec": 30 00:04:16.945 } 00:04:16.945 }, 00:04:16.945 { 00:04:16.945 "method": "bdev_nvme_set_options", 00:04:16.945 "params": { 00:04:16.945 "action_on_timeout": "none", 00:04:16.945 "timeout_us": 0, 00:04:16.945 "timeout_admin_us": 0, 00:04:16.945 "keep_alive_timeout_ms": 10000, 00:04:16.945 "arbitration_burst": 0, 00:04:16.945 "low_priority_weight": 0, 00:04:16.945 "medium_priority_weight": 0, 00:04:16.945 "high_priority_weight": 0, 00:04:16.945 "nvme_adminq_poll_period_us": 10000, 00:04:16.945 "nvme_ioq_poll_period_us": 0, 00:04:16.945 "io_queue_requests": 0, 00:04:16.945 "delay_cmd_submit": true, 00:04:16.945 "transport_retry_count": 4, 00:04:16.945 "bdev_retry_count": 3, 00:04:16.945 "transport_ack_timeout": 0, 00:04:16.945 "ctrlr_loss_timeout_sec": 0, 00:04:16.945 "reconnect_delay_sec": 0, 00:04:16.945 "fast_io_fail_timeout_sec": 0, 00:04:16.945 "disable_auto_failback": false, 00:04:16.945 "generate_uuids": false, 00:04:16.945 "transport_tos": 
0, 00:04:16.945 "nvme_error_stat": false, 00:04:16.945 "rdma_srq_size": 0, 00:04:16.945 "io_path_stat": false, 00:04:16.945 "allow_accel_sequence": false, 00:04:16.945 "rdma_max_cq_size": 0, 00:04:16.945 "rdma_cm_event_timeout_ms": 0, 00:04:16.945 "dhchap_digests": [ 00:04:16.945 "sha256", 00:04:16.945 "sha384", 00:04:16.945 "sha512" 00:04:16.945 ], 00:04:16.945 "dhchap_dhgroups": [ 00:04:16.945 "null", 00:04:16.945 "ffdhe2048", 00:04:16.945 "ffdhe3072", 00:04:16.945 "ffdhe4096", 00:04:16.945 "ffdhe6144", 00:04:16.945 "ffdhe8192" 00:04:16.945 ] 00:04:16.945 } 00:04:16.945 }, 00:04:16.945 { 00:04:16.945 "method": "bdev_nvme_set_hotplug", 00:04:16.945 "params": { 00:04:16.945 "period_us": 100000, 00:04:16.945 "enable": false 00:04:16.945 } 00:04:16.945 }, 00:04:16.945 { 00:04:16.945 "method": "bdev_wait_for_examine" 00:04:16.945 } 00:04:16.945 ] 00:04:16.945 }, 00:04:16.945 { 00:04:16.945 "subsystem": "scsi", 00:04:16.945 "config": null 00:04:16.945 }, 00:04:16.945 { 00:04:16.945 "subsystem": "scheduler", 00:04:16.945 "config": [ 00:04:16.945 { 00:04:16.945 "method": "framework_set_scheduler", 00:04:16.945 "params": { 00:04:16.945 "name": "static" 00:04:16.945 } 00:04:16.945 } 00:04:16.945 ] 00:04:16.945 }, 00:04:16.945 { 00:04:16.945 "subsystem": "vhost_scsi", 00:04:16.945 "config": [] 00:04:16.945 }, 00:04:16.945 { 00:04:16.945 "subsystem": "vhost_blk", 00:04:16.945 "config": [] 00:04:16.945 }, 00:04:16.945 { 00:04:16.945 "subsystem": "ublk", 00:04:16.945 "config": [] 00:04:16.945 }, 00:04:16.945 { 00:04:16.945 "subsystem": "nbd", 00:04:16.945 "config": [] 00:04:16.945 }, 00:04:16.945 { 00:04:16.945 "subsystem": "nvmf", 00:04:16.945 "config": [ 00:04:16.945 { 00:04:16.945 "method": "nvmf_set_config", 00:04:16.945 "params": { 00:04:16.945 "discovery_filter": "match_any", 00:04:16.945 "admin_cmd_passthru": { 00:04:16.945 "identify_ctrlr": false 00:04:16.945 }, 00:04:16.945 "dhchap_digests": [ 00:04:16.945 "sha256", 00:04:16.945 "sha384", 00:04:16.945 "sha512" 00:04:16.945 ], 00:04:16.945 "dhchap_dhgroups": [ 00:04:16.945 "null", 00:04:16.945 "ffdhe2048", 00:04:16.945 "ffdhe3072", 00:04:16.945 "ffdhe4096", 00:04:16.945 "ffdhe6144", 00:04:16.945 "ffdhe8192" 00:04:16.945 ] 00:04:16.945 } 00:04:16.945 }, 00:04:16.945 { 00:04:16.945 "method": "nvmf_set_max_subsystems", 00:04:16.945 "params": { 00:04:16.945 "max_subsystems": 1024 00:04:16.945 } 00:04:16.945 }, 00:04:16.945 { 00:04:16.945 "method": "nvmf_set_crdt", 00:04:16.945 "params": { 00:04:16.945 "crdt1": 0, 00:04:16.945 "crdt2": 0, 00:04:16.945 "crdt3": 0 00:04:16.945 } 00:04:16.945 }, 00:04:16.945 { 00:04:16.945 "method": "nvmf_create_transport", 00:04:16.945 "params": { 00:04:16.945 "trtype": "TCP", 00:04:16.945 "max_queue_depth": 128, 00:04:16.945 "max_io_qpairs_per_ctrlr": 127, 00:04:16.945 "in_capsule_data_size": 4096, 00:04:16.945 "max_io_size": 131072, 00:04:16.945 "io_unit_size": 131072, 00:04:16.945 "max_aq_depth": 128, 00:04:16.945 "num_shared_buffers": 511, 00:04:16.945 "buf_cache_size": 4294967295, 00:04:16.945 "dif_insert_or_strip": false, 00:04:16.945 "zcopy": false, 00:04:16.945 "c2h_success": true, 00:04:16.945 "sock_priority": 0, 00:04:16.945 "abort_timeout_sec": 1, 00:04:16.945 "ack_timeout": 0, 00:04:16.945 "data_wr_pool_size": 0 00:04:16.945 } 00:04:16.945 } 00:04:16.945 ] 00:04:16.945 }, 00:04:16.945 { 00:04:16.945 "subsystem": "iscsi", 00:04:16.945 "config": [ 00:04:16.945 { 00:04:16.945 "method": "iscsi_set_options", 00:04:16.945 "params": { 00:04:16.945 "node_base": "iqn.2016-06.io.spdk", 00:04:16.945 "max_sessions": 
128, 00:04:16.945 "max_connections_per_session": 2, 00:04:16.945 "max_queue_depth": 64, 00:04:16.945 "default_time2wait": 2, 00:04:16.945 "default_time2retain": 20, 00:04:16.945 "first_burst_length": 8192, 00:04:16.945 "immediate_data": true, 00:04:16.945 "allow_duplicated_isid": false, 00:04:16.945 "error_recovery_level": 0, 00:04:16.945 "nop_timeout": 60, 00:04:16.945 "nop_in_interval": 30, 00:04:16.945 "disable_chap": false, 00:04:16.945 "require_chap": false, 00:04:16.945 "mutual_chap": false, 00:04:16.945 "chap_group": 0, 00:04:16.945 "max_large_datain_per_connection": 64, 00:04:16.945 "max_r2t_per_connection": 4, 00:04:16.945 "pdu_pool_size": 36864, 00:04:16.945 "immediate_data_pool_size": 16384, 00:04:16.945 "data_out_pool_size": 2048 00:04:16.945 } 00:04:16.945 } 00:04:16.945 ] 00:04:16.945 } 00:04:16.945 ] 00:04:16.945 } 00:04:16.945 03:52:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:16.945 03:52:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2265775 00:04:16.945 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2265775 ']' 00:04:16.945 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2265775 00:04:16.945 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:16.945 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.945 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2265775 00:04:16.945 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.945 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:16.945 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2265775' 00:04:16.945 killing process with pid 2265775 00:04:16.945 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2265775 00:04:16.945 03:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2265775 00:04:17.513 03:52:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2265919 00:04:17.513 03:52:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:17.513 03:52:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:22.777 03:52:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2265919 00:04:22.777 03:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2265919 ']' 00:04:22.777 03:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2265919 00:04:22.777 03:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:22.777 03:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.777 03:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2265919 00:04:22.777 03:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.777 03:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.777 03:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2265919' 00:04:22.777 killing process with pid 2265919 00:04:22.777 03:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2265919 00:04:22.777 03:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2265919 00:04:22.777 03:52:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:22.777 03:52:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:23.036 00:04:23.036 real 0m6.542s 00:04:23.036 user 0m6.215s 00:04:23.036 sys 0m0.653s 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.036 ************************************ 00:04:23.036 END TEST skip_rpc_with_json 00:04:23.036 ************************************ 00:04:23.036 03:52:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:23.036 03:52:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.036 03:52:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.036 03:52:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.036 ************************************ 00:04:23.036 START TEST skip_rpc_with_delay 00:04:23.036 ************************************ 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:23.036 
[2024-12-10 03:52:17.270662] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:23.036 00:04:23.036 real 0m0.075s 00:04:23.036 user 0m0.054s 00:04:23.036 sys 0m0.021s 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.036 03:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:23.036 ************************************ 00:04:23.036 END TEST skip_rpc_with_delay 00:04:23.036 ************************************ 00:04:23.036 03:52:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:23.036 03:52:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:23.036 03:52:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:23.036 03:52:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.036 03:52:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.036 03:52:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.036 ************************************ 00:04:23.036 START TEST exit_on_failed_rpc_init 00:04:23.036 ************************************ 00:04:23.036 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:23.036 03:52:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2266633 00:04:23.036 03:52:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:23.036 03:52:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2266633 00:04:23.036 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2266633 ']' 00:04:23.036 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.036 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.036 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.036 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.036 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:23.036 [2024-12-10 03:52:17.390762] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:23.036 [2024-12-10 03:52:17.390869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266633 ] 00:04:23.294 [2024-12-10 03:52:17.458198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.294 [2024-12-10 03:52:17.519037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.553 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.553 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:23.553 03:52:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.553 03:52:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.553 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:23.553 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.553 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.553 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:23.553 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.553 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:23.553 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.553 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:23.553 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.553 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:23.553 03:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.553 [2024-12-10 03:52:17.843655] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:23.553 [2024-12-10 03:52:17.843752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266743 ] 00:04:23.553 [2024-12-10 03:52:17.909329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.811 [2024-12-10 03:52:17.969883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.811 [2024-12-10 03:52:17.970013] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
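The two RPC errors around this point are the expected outcome: the second spdk_tgt (core mask 0x2) tries to listen on the same default RPC socket, /var/tmp/spdk.sock, that the first instance (0x1) already owns. Purely to illustrate the underlying constraint, and not what exit_on_failed_rpc_init does (it provokes the conflict on purpose), two targets can coexist when each is given its own RPC socket via -r; the second socket path below is invented for the example.

```bash
# First target owns the default RPC socket /var/tmp/spdk.sock
./build/bin/spdk_tgt -m 0x1 &

# Second target gets a disjoint core mask and its own RPC socket
./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_second.sock &

# Address each instance explicitly through its socket
./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version
./scripts/rpc.py -s /var/tmp/spdk_second.sock spdk_get_version
```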
00:04:23.811 [2024-12-10 03:52:17.970033] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:23.811 [2024-12-10 03:52:17.970044] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:23.811 03:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:23.811 03:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:23.811 03:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:23.811 03:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:23.811 03:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:23.811 03:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:23.811 03:52:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:23.811 03:52:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2266633 00:04:23.811 03:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2266633 ']' 00:04:23.811 03:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2266633 00:04:23.811 03:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:23.811 03:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.811 03:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2266633 00:04:23.811 03:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.811 03:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.811 03:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2266633' 00:04:23.811 killing process with pid 2266633 00:04:23.811 03:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2266633 00:04:23.811 03:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2266633 00:04:24.379 00:04:24.379 real 0m1.160s 00:04:24.379 user 0m1.285s 00:04:24.379 sys 0m0.424s 00:04:24.379 03:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.379 03:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.379 ************************************ 00:04:24.379 END TEST exit_on_failed_rpc_init 00:04:24.379 ************************************ 00:04:24.379 03:52:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:24.379 00:04:24.379 real 0m13.576s 00:04:24.379 user 0m12.881s 00:04:24.379 sys 0m1.598s 00:04:24.379 03:52:18 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.379 03:52:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.379 ************************************ 00:04:24.379 END TEST skip_rpc 00:04:24.379 ************************************ 00:04:24.379 03:52:18 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:24.379 03:52:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.379 03:52:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.379 03:52:18 -- 
common/autotest_common.sh@10 -- # set +x 00:04:24.379 ************************************ 00:04:24.379 START TEST rpc_client 00:04:24.379 ************************************ 00:04:24.379 03:52:18 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:24.379 * Looking for test storage... 00:04:24.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:24.379 03:52:18 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:24.379 03:52:18 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:24.379 03:52:18 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:24.379 03:52:18 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.379 03:52:18 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:24.379 03:52:18 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.379 03:52:18 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:24.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.379 --rc genhtml_branch_coverage=1 00:04:24.379 --rc genhtml_function_coverage=1 00:04:24.379 --rc genhtml_legend=1 00:04:24.379 --rc geninfo_all_blocks=1 00:04:24.379 --rc geninfo_unexecuted_blocks=1 00:04:24.379 00:04:24.379 ' 00:04:24.379 03:52:18 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:24.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.379 --rc genhtml_branch_coverage=1 00:04:24.379 --rc genhtml_function_coverage=1 00:04:24.379 --rc genhtml_legend=1 00:04:24.379 --rc geninfo_all_blocks=1 00:04:24.379 --rc geninfo_unexecuted_blocks=1 00:04:24.379 00:04:24.379 ' 00:04:24.379 03:52:18 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:24.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.379 --rc genhtml_branch_coverage=1 00:04:24.379 --rc genhtml_function_coverage=1 00:04:24.379 --rc genhtml_legend=1 00:04:24.379 --rc geninfo_all_blocks=1 00:04:24.379 --rc geninfo_unexecuted_blocks=1 00:04:24.379 00:04:24.379 ' 00:04:24.379 03:52:18 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:24.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.379 --rc genhtml_branch_coverage=1 00:04:24.379 --rc genhtml_function_coverage=1 00:04:24.379 --rc genhtml_legend=1 00:04:24.379 --rc geninfo_all_blocks=1 00:04:24.379 --rc geninfo_unexecuted_blocks=1 00:04:24.379 00:04:24.379 ' 00:04:24.379 03:52:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:24.379 OK 00:04:24.379 03:52:18 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:24.379 00:04:24.379 real 0m0.164s 00:04:24.379 user 0m0.106s 00:04:24.379 sys 0m0.067s 00:04:24.379 03:52:18 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.379 03:52:18 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:24.379 ************************************ 00:04:24.379 END TEST rpc_client 00:04:24.379 ************************************ 00:04:24.379 03:52:18 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
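The xtrace block above (and the nearly identical one that follows for json_config) is scripts/common.sh deciding whether the installed lcov predates 2.x so it can pick the matching --rc option names for LCOV_OPTS. A simplified reconstruction of that comparison, inferred from the trace rather than copied from scripts/common.sh, so the real helper may differ in detail:

```bash
#!/usr/bin/env bash
# Compare two dotted version strings numerically, field by field.
# Returns 0 (true) when $1 < $2, mirroring the "lt 1.15 2" check in the trace.
version_lt() {
  local -a v1 v2
  IFS='.-:' read -ra v1 <<< "$1"
  IFS='.-:' read -ra v2 <<< "$2"
  local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < max; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal versions are not "less than"
}

version_lt "1.15" "2" && echo "lcov 1.15 is older than 2, use the legacy --rc names"
```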
00:04:24.379 03:52:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.379 03:52:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.379 03:52:18 -- common/autotest_common.sh@10 -- # set +x 00:04:24.638 ************************************ 00:04:24.638 START TEST json_config 00:04:24.638 ************************************ 00:04:24.638 03:52:18 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:24.638 03:52:18 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:24.638 03:52:18 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:24.638 03:52:18 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:24.638 03:52:18 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:24.638 03:52:18 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.638 03:52:18 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.638 03:52:18 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.638 03:52:18 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.638 03:52:18 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.638 03:52:18 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.638 03:52:18 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.638 03:52:18 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.638 03:52:18 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.638 03:52:18 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.638 03:52:18 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.638 03:52:18 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:24.638 03:52:18 json_config -- scripts/common.sh@345 -- # : 1 00:04:24.638 03:52:18 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.638 03:52:18 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.638 03:52:18 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:24.638 03:52:18 json_config -- scripts/common.sh@353 -- # local d=1 00:04:24.638 03:52:18 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.638 03:52:18 json_config -- scripts/common.sh@355 -- # echo 1 00:04:24.638 03:52:18 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.638 03:52:18 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:24.638 03:52:18 json_config -- scripts/common.sh@353 -- # local d=2 00:04:24.638 03:52:18 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.638 03:52:18 json_config -- scripts/common.sh@355 -- # echo 2 00:04:24.639 03:52:18 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.639 03:52:18 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.639 03:52:18 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.639 03:52:18 json_config -- scripts/common.sh@368 -- # return 0 00:04:24.639 03:52:18 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.639 03:52:18 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:24.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.639 --rc genhtml_branch_coverage=1 00:04:24.639 --rc genhtml_function_coverage=1 00:04:24.639 --rc genhtml_legend=1 00:04:24.639 --rc geninfo_all_blocks=1 00:04:24.639 --rc geninfo_unexecuted_blocks=1 00:04:24.639 00:04:24.639 ' 00:04:24.639 03:52:18 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:24.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.639 --rc genhtml_branch_coverage=1 00:04:24.639 --rc genhtml_function_coverage=1 00:04:24.639 --rc genhtml_legend=1 00:04:24.639 --rc geninfo_all_blocks=1 00:04:24.639 --rc geninfo_unexecuted_blocks=1 00:04:24.639 00:04:24.639 ' 00:04:24.639 03:52:18 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:24.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.639 --rc genhtml_branch_coverage=1 00:04:24.639 --rc genhtml_function_coverage=1 00:04:24.639 --rc genhtml_legend=1 00:04:24.639 --rc geninfo_all_blocks=1 00:04:24.639 --rc geninfo_unexecuted_blocks=1 00:04:24.639 00:04:24.639 ' 00:04:24.639 03:52:18 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:24.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.639 --rc genhtml_branch_coverage=1 00:04:24.639 --rc genhtml_function_coverage=1 00:04:24.639 --rc genhtml_legend=1 00:04:24.639 --rc geninfo_all_blocks=1 00:04:24.639 --rc geninfo_unexecuted_blocks=1 00:04:24.639 00:04:24.639 ' 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:24.639 03:52:18 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:24.639 03:52:18 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:24.639 03:52:18 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.639 03:52:18 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.639 03:52:18 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.639 03:52:18 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.639 03:52:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.639 03:52:18 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.639 03:52:18 json_config -- paths/export.sh@5 -- # export PATH 00:04:24.639 03:52:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@51 -- # : 0 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:24.639 03:52:18 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:24.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:24.639 03:52:18 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:24.639 INFO: JSON configuration test init 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:24.639 03:52:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.639 03:52:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:24.639 03:52:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.639 03:52:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.639 03:52:18 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:24.639 03:52:18 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:24.639 03:52:18 json_config -- json_config/common.sh@10 -- # shift 00:04:24.639 03:52:18 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.639 03:52:18 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.639 03:52:18 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.639 03:52:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.639 03:52:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.639 03:52:18 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2267023 00:04:24.639 03:52:18 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:24.639 03:52:18 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:24.639 Waiting for target to run... 00:04:24.639 03:52:18 json_config -- json_config/common.sh@25 -- # waitforlisten 2267023 /var/tmp/spdk_tgt.sock 00:04:24.639 03:52:18 json_config -- common/autotest_common.sh@835 -- # '[' -z 2267023 ']' 00:04:24.639 03:52:18 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.639 03:52:18 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.639 03:52:18 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:24.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:24.639 03:52:18 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.639 03:52:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.639 [2024-12-10 03:52:18.989476] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:24.639 [2024-12-10 03:52:18.989581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2267023 ] 00:04:25.208 [2024-12-10 03:52:19.524559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.208 [2024-12-10 03:52:19.574355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.773 03:52:19 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.773 03:52:19 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:25.773 03:52:19 json_config -- json_config/common.sh@26 -- # echo '' 00:04:25.773 00:04:25.773 03:52:19 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:25.773 03:52:19 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:25.773 03:52:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.773 03:52:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.773 03:52:19 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:25.773 03:52:19 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:25.773 03:52:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.773 03:52:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.773 03:52:19 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:25.773 03:52:19 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:25.773 03:52:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:29.058 03:52:23 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:29.058 03:52:23 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:29.058 03:52:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.058 03:52:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.058 03:52:23 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:29.058 03:52:23 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:29.058 03:52:23 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:29.058 03:52:23 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:29.058 03:52:23 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:29.058 03:52:23 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:29.058 03:52:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:29.058 03:52:23 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:29.316 03:52:23 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@54 -- # sort 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:29.316 03:52:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:29.316 03:52:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:29.316 03:52:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.316 03:52:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:29.316 03:52:23 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:29.316 03:52:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:29.574 MallocForNvmf0 00:04:29.574 03:52:23 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:29.574 03:52:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:29.831 MallocForNvmf1 00:04:29.831 03:52:24 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:29.831 03:52:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:30.089 [2024-12-10 03:52:24.267933] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.089 03:52:24 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.089 03:52:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.346 03:52:24 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:30.346 03:52:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:30.604 03:52:24 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:30.604 03:52:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:30.879 03:52:25 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:30.879 03:52:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:31.141 [2024-12-10 03:52:25.339346] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.141 03:52:25 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:31.141 03:52:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.141 03:52:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.141 03:52:25 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:31.141 03:52:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.141 03:52:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.141 03:52:25 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:31.141 03:52:25 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.141 03:52:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.398 MallocBdevForConfigChangeCheck 00:04:31.398 03:52:25 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:31.398 03:52:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.398 03:52:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.398 03:52:25 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:31.398 03:52:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.962 03:52:26 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:31.963 INFO: shutting down applications... 
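Everything json_config has done between create_nvmf_subsystem_config and the save_config call above corresponds to a short rpc.py sequence. It is collected here as a recap, with every command, argument, and name taken from the trace; only the redirection of save_config into spdk_tgt_config.json is an assumption about how the script produces the file that the relaunch below consumes.

```bash
RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

# Malloc bdevs that back the NVMe-oF namespaces
$RPC bdev_malloc_create 8 512  --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

# TCP transport, then a subsystem carrying both namespaces and a TCP listener
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

# Sentinel bdev used later to detect configuration drift, then snapshot the
# configuration; the snapshot is what the relaunch replays via --json
$RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
$RPC save_config > spdk_tgt_config.json
```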
00:04:31.963 03:52:26 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:31.963 03:52:26 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:31.963 03:52:26 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:31.963 03:52:26 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:33.859 Calling clear_iscsi_subsystem 00:04:33.859 Calling clear_nvmf_subsystem 00:04:33.859 Calling clear_nbd_subsystem 00:04:33.859 Calling clear_ublk_subsystem 00:04:33.859 Calling clear_vhost_blk_subsystem 00:04:33.859 Calling clear_vhost_scsi_subsystem 00:04:33.859 Calling clear_bdev_subsystem 00:04:33.859 03:52:27 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:33.859 03:52:27 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:33.859 03:52:27 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:33.859 03:52:27 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:33.859 03:52:27 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:33.859 03:52:27 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:33.859 03:52:28 json_config -- json_config/json_config.sh@352 -- # break 00:04:33.859 03:52:28 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:33.859 03:52:28 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:33.859 03:52:28 json_config -- json_config/common.sh@31 -- # local app=target 00:04:33.859 03:52:28 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:33.859 03:52:28 json_config -- json_config/common.sh@35 -- # [[ -n 2267023 ]] 00:04:33.859 03:52:28 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2267023 00:04:33.859 03:52:28 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:33.859 03:52:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.859 03:52:28 json_config -- json_config/common.sh@41 -- # kill -0 2267023 00:04:33.859 03:52:28 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.427 03:52:28 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.427 03:52:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.427 03:52:28 json_config -- json_config/common.sh@41 -- # kill -0 2267023 00:04:34.427 03:52:28 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:34.427 03:52:28 json_config -- json_config/common.sh@43 -- # break 00:04:34.427 03:52:28 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:34.427 03:52:28 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:34.427 SPDK target shutdown done 00:04:34.427 03:52:28 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:34.427 INFO: relaunching applications... 
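The shutdown traced above follows a small pattern from test/json_config/common.sh: clear the live configuration with clear_config.py, send SIGINT to the target, then poll with kill -0 until the process is gone. A condensed sketch (PID, retry count and sleep interval taken from the trace):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK_DIR/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config   # drop the configured subsystems
app_pid=2267023                          # target PID recorded when it was launched
kill -SIGINT "$app_pid"                  # ask spdk_tgt to shut down cleanly
for (( i = 0; i < 30; i++ )); do         # wait up to ~15 s (30 x 0.5 s)
    kill -0 "$app_pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
    sleep 0.5
done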
00:04:34.427 03:52:28 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.427 03:52:28 json_config -- json_config/common.sh@9 -- # local app=target 00:04:34.427 03:52:28 json_config -- json_config/common.sh@10 -- # shift 00:04:34.427 03:52:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:34.427 03:52:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:34.427 03:52:28 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:34.427 03:52:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.427 03:52:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.428 03:52:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2268227 00:04:34.428 03:52:28 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.428 03:52:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:34.428 Waiting for target to run... 00:04:34.428 03:52:28 json_config -- json_config/common.sh@25 -- # waitforlisten 2268227 /var/tmp/spdk_tgt.sock 00:04:34.428 03:52:28 json_config -- common/autotest_common.sh@835 -- # '[' -z 2268227 ']' 00:04:34.428 03:52:28 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:34.428 03:52:28 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.428 03:52:28 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:34.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:34.428 03:52:28 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.428 03:52:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.428 [2024-12-10 03:52:28.769243] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:34.428 [2024-12-10 03:52:28.769332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268227 ] 00:04:34.995 [2024-12-10 03:52:29.314701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.995 [2024-12-10 03:52:29.365915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.279 [2024-12-10 03:52:32.421141] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:38.279 [2024-12-10 03:52:32.453648] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:38.279 03:52:32 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.279 03:52:32 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:38.279 03:52:32 json_config -- json_config/common.sh@26 -- # echo '' 00:04:38.279 00:04:38.279 03:52:32 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:38.279 03:52:32 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:38.279 INFO: Checking if target configuration is the same... 
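The check announced above compares two normalized JSON documents: the relaunched target's running configuration (tgt_rpc save_config) and the file it was started from, both passed through config_filter.py -method sort before diffing. A reduced sketch of that pipeline, assuming config_filter.py filters stdin to stdout as json_diff.sh invokes it here (the /tmp file names below are placeholders, not the mktemp names from the trace):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
SORT="$SPDK_DIR/test/json_config/config_filter.py -method sort"

$RPC save_config | $SORT > /tmp/live_config.json                   # live config, normalized
$SORT < "$SPDK_DIR/spdk_tgt_config.json" > /tmp/saved_config.json  # saved config, normalized the same way
if diff -u /tmp/live_config.json /tmp/saved_config.json; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi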
00:04:38.279 03:52:32 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.279 03:52:32 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:38.279 03:52:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:38.279 + '[' 2 -ne 2 ']' 00:04:38.279 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:38.279 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:38.279 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:38.279 +++ basename /dev/fd/62 00:04:38.279 ++ mktemp /tmp/62.XXX 00:04:38.279 + tmp_file_1=/tmp/62.AXD 00:04:38.279 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.279 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:38.279 + tmp_file_2=/tmp/spdk_tgt_config.json.93K 00:04:38.279 + ret=0 00:04:38.279 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:38.537 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:38.795 + diff -u /tmp/62.AXD /tmp/spdk_tgt_config.json.93K 00:04:38.795 + echo 'INFO: JSON config files are the same' 00:04:38.795 INFO: JSON config files are the same 00:04:38.795 + rm /tmp/62.AXD /tmp/spdk_tgt_config.json.93K 00:04:38.795 + exit 0 00:04:38.795 03:52:32 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:38.795 03:52:32 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:38.795 INFO: changing configuration and checking if this can be detected... 00:04:38.795 03:52:32 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:38.795 03:52:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:39.053 03:52:33 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.053 03:52:33 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:39.053 03:52:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.053 + '[' 2 -ne 2 ']' 00:04:39.053 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:39.053 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:39.053 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:39.053 +++ basename /dev/fd/62 00:04:39.053 ++ mktemp /tmp/62.XXX 00:04:39.053 + tmp_file_1=/tmp/62.8QQ 00:04:39.053 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.053 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:39.053 + tmp_file_2=/tmp/spdk_tgt_config.json.yMD 00:04:39.053 + ret=0 00:04:39.053 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.311 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.311 + diff -u /tmp/62.8QQ /tmp/spdk_tgt_config.json.yMD 00:04:39.311 + ret=1 00:04:39.311 + echo '=== Start of file: /tmp/62.8QQ ===' 00:04:39.311 + cat /tmp/62.8QQ 00:04:39.311 + echo '=== End of file: /tmp/62.8QQ ===' 00:04:39.311 + echo '' 00:04:39.311 + echo '=== Start of file: /tmp/spdk_tgt_config.json.yMD ===' 00:04:39.311 + cat /tmp/spdk_tgt_config.json.yMD 00:04:39.311 + echo '=== End of file: /tmp/spdk_tgt_config.json.yMD ===' 00:04:39.311 + echo '' 00:04:39.311 + rm /tmp/62.8QQ /tmp/spdk_tgt_config.json.yMD 00:04:39.311 + exit 1 00:04:39.311 03:52:33 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:39.311 INFO: configuration change detected. 00:04:39.311 03:52:33 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:39.311 03:52:33 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:39.311 03:52:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:39.311 03:52:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.311 03:52:33 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:39.311 03:52:33 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:39.311 03:52:33 json_config -- json_config/json_config.sh@324 -- # [[ -n 2268227 ]] 00:04:39.311 03:52:33 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:39.311 03:52:33 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:39.311 03:52:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:39.311 03:52:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.311 03:52:33 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:39.311 03:52:33 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:39.311 03:52:33 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:39.311 03:52:33 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:39.311 03:52:33 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:39.311 03:52:33 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:39.311 03:52:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:39.311 03:52:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.311 03:52:33 json_config -- json_config/json_config.sh@330 -- # killprocess 2268227 00:04:39.311 03:52:33 json_config -- common/autotest_common.sh@954 -- # '[' -z 2268227 ']' 00:04:39.311 03:52:33 json_config -- common/autotest_common.sh@958 -- # kill -0 2268227 00:04:39.311 03:52:33 json_config -- common/autotest_common.sh@959 -- # uname 00:04:39.311 03:52:33 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.311 03:52:33 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2268227 00:04:39.569 03:52:33 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.569 03:52:33 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.569 03:52:33 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2268227' 00:04:39.569 killing process with pid 2268227 00:04:39.569 03:52:33 json_config -- common/autotest_common.sh@973 -- # kill 2268227 00:04:39.569 03:52:33 json_config -- common/autotest_common.sh@978 -- # wait 2268227 00:04:41.466 03:52:35 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:41.466 03:52:35 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:41.466 03:52:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:41.466 03:52:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.466 03:52:35 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:41.466 03:52:35 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:41.466 INFO: Success 00:04:41.466 00:04:41.466 real 0m16.623s 00:04:41.466 user 0m17.956s 00:04:41.466 sys 0m2.889s 00:04:41.466 03:52:35 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.466 03:52:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.466 ************************************ 00:04:41.466 END TEST json_config 00:04:41.466 ************************************ 00:04:41.466 03:52:35 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:41.466 03:52:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.466 03:52:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.466 03:52:35 -- common/autotest_common.sh@10 -- # set +x 00:04:41.466 ************************************ 00:04:41.466 START TEST json_config_extra_key 00:04:41.466 ************************************ 00:04:41.466 03:52:35 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:41.466 03:52:35 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.466 03:52:35 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.466 03:52:35 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.466 03:52:35 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.466 03:52:35 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.466 03:52:35 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:41.466 03:52:35 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.466 03:52:35 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.467 --rc genhtml_branch_coverage=1 00:04:41.467 --rc genhtml_function_coverage=1 00:04:41.467 --rc genhtml_legend=1 00:04:41.467 --rc geninfo_all_blocks=1 00:04:41.467 --rc geninfo_unexecuted_blocks=1 00:04:41.467 00:04:41.467 ' 00:04:41.467 03:52:35 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.467 --rc genhtml_branch_coverage=1 00:04:41.467 --rc genhtml_function_coverage=1 00:04:41.467 --rc genhtml_legend=1 00:04:41.467 --rc geninfo_all_blocks=1 00:04:41.467 --rc geninfo_unexecuted_blocks=1 00:04:41.467 00:04:41.467 ' 00:04:41.467 03:52:35 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.467 --rc genhtml_branch_coverage=1 00:04:41.467 --rc genhtml_function_coverage=1 00:04:41.467 --rc genhtml_legend=1 00:04:41.467 --rc geninfo_all_blocks=1 00:04:41.467 --rc geninfo_unexecuted_blocks=1 00:04:41.467 00:04:41.467 ' 00:04:41.467 03:52:35 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.467 --rc genhtml_branch_coverage=1 00:04:41.467 --rc genhtml_function_coverage=1 00:04:41.467 --rc genhtml_legend=1 00:04:41.467 --rc geninfo_all_blocks=1 00:04:41.467 --rc geninfo_unexecuted_blocks=1 00:04:41.467 00:04:41.467 ' 00:04:41.467 03:52:35 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:41.467 03:52:35 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.467 03:52:35 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.467 03:52:35 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.467 03:52:35 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.467 03:52:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.467 03:52:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.467 03:52:35 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.467 03:52:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:41.467 03:52:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.467 03:52:35 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.467 03:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:41.467 03:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:41.467 03:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:41.467 03:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:41.467 03:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:41.467 03:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:41.467 03:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:41.467 03:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:41.467 03:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:41.467 03:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:41.467 03:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:41.467 INFO: launching applications... 
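The declare -A lines above are the bookkeeping test/json_config/common.sh keeps for every application it manages, keyed by app name ('target' in this run). A compact restatement of the state captured in the trace (SPDK_DIR stands for the workspace path used throughout this log):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
declare -A app_pid=(      ['target']=''                       )   # filled in once spdk_tgt is started
declare -A app_socket=(   ['target']='/var/tmp/spdk_tgt.sock' )   # RPC socket handed to spdk_tgt via -r
declare -A app_params=(   ['target']='-m 0x1 -s 1024'         )   # core mask and memory size for spdk_tgt
declare -A configs_path=( ['target']="$SPDK_DIR/test/json_config/extra_key.json" )  # --json config to load
# json_config_test_start_app then launches
#   spdk_tgt ${app_params[$app]} -r ${app_socket[$app]} --json ${configs_path[$app]}
# and records the new PID in app_pid[$app]; the shutdown helper later signals and polls that PID.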
00:04:41.467 03:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:41.467 03:52:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:41.467 03:52:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:41.467 03:52:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:41.467 03:52:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:41.467 03:52:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:41.467 03:52:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.467 03:52:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.467 03:52:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2269148 00:04:41.467 03:52:35 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:41.467 03:52:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:41.467 Waiting for target to run... 00:04:41.467 03:52:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2269148 /var/tmp/spdk_tgt.sock 00:04:41.467 03:52:35 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2269148 ']' 00:04:41.467 03:52:35 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.467 03:52:35 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.467 03:52:35 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:41.467 03:52:35 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.467 03:52:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:41.467 [2024-12-10 03:52:35.647766] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:41.467 [2024-12-10 03:52:35.647845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269148 ] 00:04:42.035 [2024-12-10 03:52:36.153374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.035 [2024-12-10 03:52:36.199845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.292 03:52:36 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.292 03:52:36 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:42.292 03:52:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:42.292 00:04:42.292 03:52:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:42.292 INFO: shutting down applications... 
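The waitforlisten call above blocks until the freshly launched target answers on its UNIX-domain RPC socket, giving up after max_retries attempts. A simplified stand-in for that wait, polling an RPC that is known to exist (rpc_get_methods, listed later in this log); this is not the real implementation in autotest_common.sh:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_addr=/var/tmp/spdk_tgt.sock
max_retries=100
echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
for (( i = 0; i < max_retries; i++ )); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
        break                          # target is up and serving RPCs
    fi
    sleep 0.5                          # retry interval is an assumption, not taken from the log
done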
00:04:42.292 03:52:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:42.292 03:52:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:42.292 03:52:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:42.292 03:52:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2269148 ]] 00:04:42.292 03:52:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2269148 00:04:42.292 03:52:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:42.292 03:52:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.292 03:52:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2269148 00:04:42.292 03:52:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.859 03:52:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.859 03:52:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.859 03:52:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2269148 00:04:42.859 03:52:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:42.859 03:52:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:42.859 03:52:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:42.859 03:52:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:42.859 SPDK target shutdown done 00:04:42.859 03:52:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:42.859 Success 00:04:42.859 00:04:42.859 real 0m1.688s 00:04:42.859 user 0m1.532s 00:04:42.859 sys 0m0.624s 00:04:42.859 03:52:37 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.859 03:52:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:42.859 ************************************ 00:04:42.859 END TEST json_config_extra_key 00:04:42.859 ************************************ 00:04:42.859 03:52:37 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:42.859 03:52:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.859 03:52:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.859 03:52:37 -- common/autotest_common.sh@10 -- # set +x 00:04:42.859 ************************************ 00:04:42.859 START TEST alias_rpc 00:04:42.859 ************************************ 00:04:42.859 03:52:37 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.117 * Looking for test storage... 
00:04:43.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:43.117 03:52:37 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:43.117 03:52:37 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:43.117 03:52:37 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:43.117 03:52:37 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.117 03:52:37 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:43.117 03:52:37 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.117 03:52:37 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:43.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.117 --rc genhtml_branch_coverage=1 00:04:43.117 --rc genhtml_function_coverage=1 00:04:43.117 --rc genhtml_legend=1 00:04:43.117 --rc geninfo_all_blocks=1 00:04:43.117 --rc geninfo_unexecuted_blocks=1 00:04:43.117 00:04:43.117 ' 00:04:43.117 03:52:37 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:43.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.117 --rc genhtml_branch_coverage=1 00:04:43.117 --rc genhtml_function_coverage=1 00:04:43.117 --rc genhtml_legend=1 00:04:43.117 --rc geninfo_all_blocks=1 00:04:43.117 --rc geninfo_unexecuted_blocks=1 00:04:43.117 00:04:43.117 ' 00:04:43.117 03:52:37 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:43.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.117 --rc genhtml_branch_coverage=1 00:04:43.117 --rc genhtml_function_coverage=1 00:04:43.117 --rc genhtml_legend=1 00:04:43.117 --rc geninfo_all_blocks=1 00:04:43.117 --rc geninfo_unexecuted_blocks=1 00:04:43.117 00:04:43.117 ' 00:04:43.117 03:52:37 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:43.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.117 --rc genhtml_branch_coverage=1 00:04:43.117 --rc genhtml_function_coverage=1 00:04:43.117 --rc genhtml_legend=1 00:04:43.117 --rc geninfo_all_blocks=1 00:04:43.117 --rc geninfo_unexecuted_blocks=1 00:04:43.117 00:04:43.117 ' 00:04:43.117 03:52:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:43.117 03:52:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2269465 00:04:43.117 03:52:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.117 03:52:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2269465 00:04:43.117 03:52:37 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2269465 ']' 00:04:43.117 03:52:37 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.117 03:52:37 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.117 03:52:37 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.117 03:52:37 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.117 03:52:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.117 [2024-12-10 03:52:37.389076] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:43.118 [2024-12-10 03:52:37.389175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269465 ] 00:04:43.118 [2024-12-10 03:52:37.454115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.376 [2024-12-10 03:52:37.511973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.634 03:52:37 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.634 03:52:37 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:43.634 03:52:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:43.892 03:52:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2269465 00:04:43.892 03:52:38 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2269465 ']' 00:04:43.892 03:52:38 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2269465 00:04:43.892 03:52:38 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:43.892 03:52:38 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.892 03:52:38 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2269465 00:04:43.892 03:52:38 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.892 03:52:38 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.892 03:52:38 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2269465' 00:04:43.892 killing process with pid 2269465 00:04:43.892 03:52:38 alias_rpc -- common/autotest_common.sh@973 -- # kill 2269465 00:04:43.892 03:52:38 alias_rpc -- common/autotest_common.sh@978 -- # wait 2269465 00:04:44.150 00:04:44.150 real 0m1.319s 00:04:44.150 user 0m1.444s 00:04:44.150 sys 0m0.425s 00:04:44.150 03:52:38 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.150 03:52:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.150 ************************************ 00:04:44.150 END TEST alias_rpc 00:04:44.150 ************************************ 00:04:44.408 03:52:38 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:44.408 03:52:38 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:44.408 03:52:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.408 03:52:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.408 03:52:38 -- common/autotest_common.sh@10 -- # set +x 00:04:44.408 ************************************ 00:04:44.408 START TEST spdkcli_tcp 00:04:44.408 ************************************ 00:04:44.408 03:52:38 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:44.408 * Looking for test storage... 
00:04:44.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:44.408 03:52:38 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:44.408 03:52:38 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:44.408 03:52:38 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:44.408 03:52:38 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:44.408 03:52:38 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.409 03:52:38 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:44.409 03:52:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:44.409 03:52:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.409 03:52:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:44.409 03:52:38 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.409 03:52:38 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.409 03:52:38 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.409 03:52:38 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:44.409 03:52:38 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.409 03:52:38 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:44.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.409 --rc genhtml_branch_coverage=1 00:04:44.409 --rc genhtml_function_coverage=1 00:04:44.409 --rc genhtml_legend=1 00:04:44.409 --rc geninfo_all_blocks=1 00:04:44.409 --rc geninfo_unexecuted_blocks=1 00:04:44.409 00:04:44.409 ' 00:04:44.409 03:52:38 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:44.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.409 --rc genhtml_branch_coverage=1 00:04:44.409 --rc genhtml_function_coverage=1 00:04:44.409 --rc genhtml_legend=1 00:04:44.409 --rc geninfo_all_blocks=1 00:04:44.409 --rc 
geninfo_unexecuted_blocks=1 00:04:44.409 00:04:44.409 ' 00:04:44.409 03:52:38 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:44.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.409 --rc genhtml_branch_coverage=1 00:04:44.409 --rc genhtml_function_coverage=1 00:04:44.409 --rc genhtml_legend=1 00:04:44.409 --rc geninfo_all_blocks=1 00:04:44.409 --rc geninfo_unexecuted_blocks=1 00:04:44.409 00:04:44.409 ' 00:04:44.409 03:52:38 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:44.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.409 --rc genhtml_branch_coverage=1 00:04:44.409 --rc genhtml_function_coverage=1 00:04:44.409 --rc genhtml_legend=1 00:04:44.409 --rc geninfo_all_blocks=1 00:04:44.409 --rc geninfo_unexecuted_blocks=1 00:04:44.409 00:04:44.409 ' 00:04:44.409 03:52:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:44.409 03:52:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:44.409 03:52:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:44.409 03:52:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:44.409 03:52:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:44.409 03:52:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:44.409 03:52:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:44.409 03:52:38 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:44.409 03:52:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.409 03:52:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2269659 00:04:44.409 03:52:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:44.409 03:52:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2269659 00:04:44.409 03:52:38 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2269659 ']' 00:04:44.409 03:52:38 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.409 03:52:38 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.409 03:52:38 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.409 03:52:38 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.409 03:52:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.409 [2024-12-10 03:52:38.771936] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:44.409 [2024-12-10 03:52:38.772021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269659 ] 00:04:44.667 [2024-12-10 03:52:38.840597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:44.667 [2024-12-10 03:52:38.898445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.667 [2024-12-10 03:52:38.898449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.925 03:52:39 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.925 03:52:39 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:44.925 03:52:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2269731 00:04:44.925 03:52:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:44.925 03:52:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:45.184 [ 00:04:45.184 "bdev_malloc_delete", 00:04:45.184 "bdev_malloc_create", 00:04:45.184 "bdev_null_resize", 00:04:45.184 "bdev_null_delete", 00:04:45.184 "bdev_null_create", 00:04:45.184 "bdev_nvme_cuse_unregister", 00:04:45.184 "bdev_nvme_cuse_register", 00:04:45.184 "bdev_opal_new_user", 00:04:45.184 "bdev_opal_set_lock_state", 00:04:45.184 "bdev_opal_delete", 00:04:45.184 "bdev_opal_get_info", 00:04:45.184 "bdev_opal_create", 00:04:45.184 "bdev_nvme_opal_revert", 00:04:45.184 "bdev_nvme_opal_init", 00:04:45.184 "bdev_nvme_send_cmd", 00:04:45.184 "bdev_nvme_set_keys", 00:04:45.184 "bdev_nvme_get_path_iostat", 00:04:45.184 "bdev_nvme_get_mdns_discovery_info", 00:04:45.184 "bdev_nvme_stop_mdns_discovery", 00:04:45.184 "bdev_nvme_start_mdns_discovery", 00:04:45.184 "bdev_nvme_set_multipath_policy", 00:04:45.184 "bdev_nvme_set_preferred_path", 00:04:45.184 "bdev_nvme_get_io_paths", 00:04:45.184 "bdev_nvme_remove_error_injection", 00:04:45.184 "bdev_nvme_add_error_injection", 00:04:45.184 "bdev_nvme_get_discovery_info", 00:04:45.184 "bdev_nvme_stop_discovery", 00:04:45.184 "bdev_nvme_start_discovery", 00:04:45.184 "bdev_nvme_get_controller_health_info", 00:04:45.184 "bdev_nvme_disable_controller", 00:04:45.184 "bdev_nvme_enable_controller", 00:04:45.184 "bdev_nvme_reset_controller", 00:04:45.184 "bdev_nvme_get_transport_statistics", 00:04:45.184 "bdev_nvme_apply_firmware", 00:04:45.184 "bdev_nvme_detach_controller", 00:04:45.184 "bdev_nvme_get_controllers", 00:04:45.184 "bdev_nvme_attach_controller", 00:04:45.184 "bdev_nvme_set_hotplug", 00:04:45.184 "bdev_nvme_set_options", 00:04:45.184 "bdev_passthru_delete", 00:04:45.184 "bdev_passthru_create", 00:04:45.184 "bdev_lvol_set_parent_bdev", 00:04:45.184 "bdev_lvol_set_parent", 00:04:45.184 "bdev_lvol_check_shallow_copy", 00:04:45.184 "bdev_lvol_start_shallow_copy", 00:04:45.184 "bdev_lvol_grow_lvstore", 00:04:45.184 "bdev_lvol_get_lvols", 00:04:45.184 "bdev_lvol_get_lvstores", 00:04:45.184 "bdev_lvol_delete", 00:04:45.184 "bdev_lvol_set_read_only", 00:04:45.184 "bdev_lvol_resize", 00:04:45.184 "bdev_lvol_decouple_parent", 00:04:45.184 "bdev_lvol_inflate", 00:04:45.184 "bdev_lvol_rename", 00:04:45.184 "bdev_lvol_clone_bdev", 00:04:45.184 "bdev_lvol_clone", 00:04:45.184 "bdev_lvol_snapshot", 00:04:45.184 "bdev_lvol_create", 00:04:45.184 "bdev_lvol_delete_lvstore", 00:04:45.184 "bdev_lvol_rename_lvstore", 
00:04:45.184 "bdev_lvol_create_lvstore", 00:04:45.184 "bdev_raid_set_options", 00:04:45.184 "bdev_raid_remove_base_bdev", 00:04:45.184 "bdev_raid_add_base_bdev", 00:04:45.184 "bdev_raid_delete", 00:04:45.184 "bdev_raid_create", 00:04:45.184 "bdev_raid_get_bdevs", 00:04:45.184 "bdev_error_inject_error", 00:04:45.184 "bdev_error_delete", 00:04:45.184 "bdev_error_create", 00:04:45.184 "bdev_split_delete", 00:04:45.184 "bdev_split_create", 00:04:45.184 "bdev_delay_delete", 00:04:45.184 "bdev_delay_create", 00:04:45.184 "bdev_delay_update_latency", 00:04:45.184 "bdev_zone_block_delete", 00:04:45.184 "bdev_zone_block_create", 00:04:45.184 "blobfs_create", 00:04:45.184 "blobfs_detect", 00:04:45.184 "blobfs_set_cache_size", 00:04:45.184 "bdev_aio_delete", 00:04:45.184 "bdev_aio_rescan", 00:04:45.184 "bdev_aio_create", 00:04:45.184 "bdev_ftl_set_property", 00:04:45.184 "bdev_ftl_get_properties", 00:04:45.184 "bdev_ftl_get_stats", 00:04:45.184 "bdev_ftl_unmap", 00:04:45.184 "bdev_ftl_unload", 00:04:45.184 "bdev_ftl_delete", 00:04:45.184 "bdev_ftl_load", 00:04:45.184 "bdev_ftl_create", 00:04:45.184 "bdev_virtio_attach_controller", 00:04:45.184 "bdev_virtio_scsi_get_devices", 00:04:45.184 "bdev_virtio_detach_controller", 00:04:45.184 "bdev_virtio_blk_set_hotplug", 00:04:45.184 "bdev_iscsi_delete", 00:04:45.184 "bdev_iscsi_create", 00:04:45.184 "bdev_iscsi_set_options", 00:04:45.184 "accel_error_inject_error", 00:04:45.184 "ioat_scan_accel_module", 00:04:45.184 "dsa_scan_accel_module", 00:04:45.184 "iaa_scan_accel_module", 00:04:45.184 "vfu_virtio_create_fs_endpoint", 00:04:45.184 "vfu_virtio_create_scsi_endpoint", 00:04:45.184 "vfu_virtio_scsi_remove_target", 00:04:45.184 "vfu_virtio_scsi_add_target", 00:04:45.184 "vfu_virtio_create_blk_endpoint", 00:04:45.184 "vfu_virtio_delete_endpoint", 00:04:45.184 "keyring_file_remove_key", 00:04:45.184 "keyring_file_add_key", 00:04:45.184 "keyring_linux_set_options", 00:04:45.184 "fsdev_aio_delete", 00:04:45.184 "fsdev_aio_create", 00:04:45.184 "iscsi_get_histogram", 00:04:45.184 "iscsi_enable_histogram", 00:04:45.184 "iscsi_set_options", 00:04:45.184 "iscsi_get_auth_groups", 00:04:45.184 "iscsi_auth_group_remove_secret", 00:04:45.184 "iscsi_auth_group_add_secret", 00:04:45.184 "iscsi_delete_auth_group", 00:04:45.184 "iscsi_create_auth_group", 00:04:45.184 "iscsi_set_discovery_auth", 00:04:45.184 "iscsi_get_options", 00:04:45.184 "iscsi_target_node_request_logout", 00:04:45.184 "iscsi_target_node_set_redirect", 00:04:45.184 "iscsi_target_node_set_auth", 00:04:45.184 "iscsi_target_node_add_lun", 00:04:45.184 "iscsi_get_stats", 00:04:45.184 "iscsi_get_connections", 00:04:45.184 "iscsi_portal_group_set_auth", 00:04:45.184 "iscsi_start_portal_group", 00:04:45.184 "iscsi_delete_portal_group", 00:04:45.184 "iscsi_create_portal_group", 00:04:45.184 "iscsi_get_portal_groups", 00:04:45.184 "iscsi_delete_target_node", 00:04:45.184 "iscsi_target_node_remove_pg_ig_maps", 00:04:45.184 "iscsi_target_node_add_pg_ig_maps", 00:04:45.184 "iscsi_create_target_node", 00:04:45.184 "iscsi_get_target_nodes", 00:04:45.184 "iscsi_delete_initiator_group", 00:04:45.184 "iscsi_initiator_group_remove_initiators", 00:04:45.184 "iscsi_initiator_group_add_initiators", 00:04:45.184 "iscsi_create_initiator_group", 00:04:45.184 "iscsi_get_initiator_groups", 00:04:45.184 "nvmf_set_crdt", 00:04:45.184 "nvmf_set_config", 00:04:45.184 "nvmf_set_max_subsystems", 00:04:45.184 "nvmf_stop_mdns_prr", 00:04:45.184 "nvmf_publish_mdns_prr", 00:04:45.184 "nvmf_subsystem_get_listeners", 00:04:45.184 
"nvmf_subsystem_get_qpairs", 00:04:45.184 "nvmf_subsystem_get_controllers", 00:04:45.184 "nvmf_get_stats", 00:04:45.184 "nvmf_get_transports", 00:04:45.184 "nvmf_create_transport", 00:04:45.184 "nvmf_get_targets", 00:04:45.184 "nvmf_delete_target", 00:04:45.184 "nvmf_create_target", 00:04:45.184 "nvmf_subsystem_allow_any_host", 00:04:45.184 "nvmf_subsystem_set_keys", 00:04:45.184 "nvmf_subsystem_remove_host", 00:04:45.184 "nvmf_subsystem_add_host", 00:04:45.184 "nvmf_ns_remove_host", 00:04:45.184 "nvmf_ns_add_host", 00:04:45.184 "nvmf_subsystem_remove_ns", 00:04:45.184 "nvmf_subsystem_set_ns_ana_group", 00:04:45.184 "nvmf_subsystem_add_ns", 00:04:45.184 "nvmf_subsystem_listener_set_ana_state", 00:04:45.184 "nvmf_discovery_get_referrals", 00:04:45.184 "nvmf_discovery_remove_referral", 00:04:45.184 "nvmf_discovery_add_referral", 00:04:45.184 "nvmf_subsystem_remove_listener", 00:04:45.184 "nvmf_subsystem_add_listener", 00:04:45.185 "nvmf_delete_subsystem", 00:04:45.185 "nvmf_create_subsystem", 00:04:45.185 "nvmf_get_subsystems", 00:04:45.185 "env_dpdk_get_mem_stats", 00:04:45.185 "nbd_get_disks", 00:04:45.185 "nbd_stop_disk", 00:04:45.185 "nbd_start_disk", 00:04:45.185 "ublk_recover_disk", 00:04:45.185 "ublk_get_disks", 00:04:45.185 "ublk_stop_disk", 00:04:45.185 "ublk_start_disk", 00:04:45.185 "ublk_destroy_target", 00:04:45.185 "ublk_create_target", 00:04:45.185 "virtio_blk_create_transport", 00:04:45.185 "virtio_blk_get_transports", 00:04:45.185 "vhost_controller_set_coalescing", 00:04:45.185 "vhost_get_controllers", 00:04:45.185 "vhost_delete_controller", 00:04:45.185 "vhost_create_blk_controller", 00:04:45.185 "vhost_scsi_controller_remove_target", 00:04:45.185 "vhost_scsi_controller_add_target", 00:04:45.185 "vhost_start_scsi_controller", 00:04:45.185 "vhost_create_scsi_controller", 00:04:45.185 "thread_set_cpumask", 00:04:45.185 "scheduler_set_options", 00:04:45.185 "framework_get_governor", 00:04:45.185 "framework_get_scheduler", 00:04:45.185 "framework_set_scheduler", 00:04:45.185 "framework_get_reactors", 00:04:45.185 "thread_get_io_channels", 00:04:45.185 "thread_get_pollers", 00:04:45.185 "thread_get_stats", 00:04:45.185 "framework_monitor_context_switch", 00:04:45.185 "spdk_kill_instance", 00:04:45.185 "log_enable_timestamps", 00:04:45.185 "log_get_flags", 00:04:45.185 "log_clear_flag", 00:04:45.185 "log_set_flag", 00:04:45.185 "log_get_level", 00:04:45.185 "log_set_level", 00:04:45.185 "log_get_print_level", 00:04:45.185 "log_set_print_level", 00:04:45.185 "framework_enable_cpumask_locks", 00:04:45.185 "framework_disable_cpumask_locks", 00:04:45.185 "framework_wait_init", 00:04:45.185 "framework_start_init", 00:04:45.185 "scsi_get_devices", 00:04:45.185 "bdev_get_histogram", 00:04:45.185 "bdev_enable_histogram", 00:04:45.185 "bdev_set_qos_limit", 00:04:45.185 "bdev_set_qd_sampling_period", 00:04:45.185 "bdev_get_bdevs", 00:04:45.185 "bdev_reset_iostat", 00:04:45.185 "bdev_get_iostat", 00:04:45.185 "bdev_examine", 00:04:45.185 "bdev_wait_for_examine", 00:04:45.185 "bdev_set_options", 00:04:45.185 "accel_get_stats", 00:04:45.185 "accel_set_options", 00:04:45.185 "accel_set_driver", 00:04:45.185 "accel_crypto_key_destroy", 00:04:45.185 "accel_crypto_keys_get", 00:04:45.185 "accel_crypto_key_create", 00:04:45.185 "accel_assign_opc", 00:04:45.185 "accel_get_module_info", 00:04:45.185 "accel_get_opc_assignments", 00:04:45.185 "vmd_rescan", 00:04:45.185 "vmd_remove_device", 00:04:45.185 "vmd_enable", 00:04:45.185 "sock_get_default_impl", 00:04:45.185 "sock_set_default_impl", 
00:04:45.185 "sock_impl_set_options", 00:04:45.185 "sock_impl_get_options", 00:04:45.185 "iobuf_get_stats", 00:04:45.185 "iobuf_set_options", 00:04:45.185 "keyring_get_keys", 00:04:45.185 "vfu_tgt_set_base_path", 00:04:45.185 "framework_get_pci_devices", 00:04:45.185 "framework_get_config", 00:04:45.185 "framework_get_subsystems", 00:04:45.185 "fsdev_set_opts", 00:04:45.185 "fsdev_get_opts", 00:04:45.185 "trace_get_info", 00:04:45.185 "trace_get_tpoint_group_mask", 00:04:45.185 "trace_disable_tpoint_group", 00:04:45.185 "trace_enable_tpoint_group", 00:04:45.185 "trace_clear_tpoint_mask", 00:04:45.185 "trace_set_tpoint_mask", 00:04:45.185 "notify_get_notifications", 00:04:45.185 "notify_get_types", 00:04:45.185 "spdk_get_version", 00:04:45.185 "rpc_get_methods" 00:04:45.185 ] 00:04:45.185 03:52:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:45.185 03:52:39 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:45.185 03:52:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.185 03:52:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:45.185 03:52:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2269659 00:04:45.185 03:52:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2269659 ']' 00:04:45.185 03:52:39 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2269659 00:04:45.185 03:52:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:45.185 03:52:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.185 03:52:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2269659 00:04:45.185 03:52:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.185 03:52:39 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.185 03:52:39 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2269659' 00:04:45.185 killing process with pid 2269659 00:04:45.185 03:52:39 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2269659 00:04:45.185 03:52:39 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2269659 00:04:45.752 00:04:45.752 real 0m1.355s 00:04:45.752 user 0m2.421s 00:04:45.752 sys 0m0.462s 00:04:45.752 03:52:39 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.752 03:52:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.752 ************************************ 00:04:45.752 END TEST spdkcli_tcp 00:04:45.752 ************************************ 00:04:45.752 03:52:39 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:45.752 03:52:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.752 03:52:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.752 03:52:39 -- common/autotest_common.sh@10 -- # set +x 00:04:45.752 ************************************ 00:04:45.752 START TEST dpdk_mem_utility 00:04:45.752 ************************************ 00:04:45.752 03:52:39 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:45.752 * Looking for test storage... 
00:04:45.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:45.752 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:45.752 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:45.752 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:45.752 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.752 03:52:40 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:45.752 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.752 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:45.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.752 --rc genhtml_branch_coverage=1 00:04:45.752 --rc genhtml_function_coverage=1 00:04:45.752 --rc genhtml_legend=1 00:04:45.752 --rc geninfo_all_blocks=1 00:04:45.752 --rc geninfo_unexecuted_blocks=1 00:04:45.752 00:04:45.752 ' 00:04:45.752 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:45.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.752 --rc 
genhtml_branch_coverage=1 00:04:45.752 --rc genhtml_function_coverage=1 00:04:45.752 --rc genhtml_legend=1 00:04:45.752 --rc geninfo_all_blocks=1 00:04:45.752 --rc geninfo_unexecuted_blocks=1 00:04:45.752 00:04:45.752 ' 00:04:45.752 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:45.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.752 --rc genhtml_branch_coverage=1 00:04:45.752 --rc genhtml_function_coverage=1 00:04:45.752 --rc genhtml_legend=1 00:04:45.752 --rc geninfo_all_blocks=1 00:04:45.752 --rc geninfo_unexecuted_blocks=1 00:04:45.752 00:04:45.752 ' 00:04:45.752 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:45.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.752 --rc genhtml_branch_coverage=1 00:04:45.752 --rc genhtml_function_coverage=1 00:04:45.752 --rc genhtml_legend=1 00:04:45.752 --rc geninfo_all_blocks=1 00:04:45.752 --rc geninfo_unexecuted_blocks=1 00:04:45.752 00:04:45.752 ' 00:04:45.752 03:52:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:45.752 03:52:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2269878 00:04:45.752 03:52:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:45.752 03:52:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2269878 00:04:45.752 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2269878 ']' 00:04:45.752 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.752 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.752 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.752 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.752 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:46.011 [2024-12-10 03:52:40.180327] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:46.011 [2024-12-10 03:52:40.180429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269878 ] 00:04:46.011 [2024-12-10 03:52:40.250733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.011 [2024-12-10 03:52:40.312179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.269 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.269 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:46.269 03:52:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:46.269 03:52:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:46.269 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.269 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:46.269 { 00:04:46.269 "filename": "/tmp/spdk_mem_dump.txt" 00:04:46.269 } 00:04:46.269 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.269 03:52:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:46.269 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:46.269 1 heaps totaling size 818.000000 MiB 00:04:46.269 size: 818.000000 MiB heap id: 0 00:04:46.269 end heaps---------- 00:04:46.269 9 mempools totaling size 603.782043 MiB 00:04:46.269 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:46.269 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:46.269 size: 100.555481 MiB name: bdev_io_2269878 00:04:46.269 size: 50.003479 MiB name: msgpool_2269878 00:04:46.269 size: 36.509338 MiB name: fsdev_io_2269878 00:04:46.269 size: 21.763794 MiB name: PDU_Pool 00:04:46.269 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:46.269 size: 4.133484 MiB name: evtpool_2269878 00:04:46.269 size: 0.026123 MiB name: Session_Pool 00:04:46.269 end mempools------- 00:04:46.269 6 memzones totaling size 4.142822 MiB 00:04:46.270 size: 1.000366 MiB name: RG_ring_0_2269878 00:04:46.270 size: 1.000366 MiB name: RG_ring_1_2269878 00:04:46.270 size: 1.000366 MiB name: RG_ring_4_2269878 00:04:46.270 size: 1.000366 MiB name: RG_ring_5_2269878 00:04:46.270 size: 0.125366 MiB name: RG_ring_2_2269878 00:04:46.270 size: 0.015991 MiB name: RG_ring_3_2269878 00:04:46.270 end memzones------- 00:04:46.527 03:52:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:46.527 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:46.527 list of free elements. 
size: 10.852478 MiB 00:04:46.527 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:46.527 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:46.527 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:46.527 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:46.527 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:46.527 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:46.527 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:46.527 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:46.527 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:46.527 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:46.527 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:46.527 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:46.527 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:46.527 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:46.527 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:46.527 list of standard malloc elements. size: 199.218628 MiB 00:04:46.527 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:46.527 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:46.527 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:46.527 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:46.527 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:46.527 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:46.527 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:46.527 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:46.527 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:46.527 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:46.527 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:46.527 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:46.527 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:46.527 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:46.527 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:46.527 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:46.527 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:46.527 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:46.527 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:46.527 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:46.527 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:46.527 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:46.527 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:46.527 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:46.527 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:46.527 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:46.527 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:46.527 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:46.527 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:46.527 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:46.527 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:46.527 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:46.527 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:46.527 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:46.527 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:46.527 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:46.527 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:46.527 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:46.527 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:46.527 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:46.527 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:46.527 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:46.527 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:46.527 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:46.527 list of memzone associated elements. size: 607.928894 MiB 00:04:46.527 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:46.527 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:46.527 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:46.527 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:46.527 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:46.527 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2269878_0 00:04:46.527 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:46.527 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2269878_0 00:04:46.527 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:46.528 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2269878_0 00:04:46.528 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:46.528 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:46.528 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:46.528 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:46.528 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:46.528 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2269878_0 00:04:46.528 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:46.528 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2269878 00:04:46.528 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:46.528 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2269878 00:04:46.528 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:46.528 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:46.528 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:46.528 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:46.528 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:46.528 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:46.528 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:46.528 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:46.528 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:46.528 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2269878 00:04:46.528 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:46.528 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2269878 00:04:46.528 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:46.528 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2269878 00:04:46.528 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:46.528 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2269878 00:04:46.528 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:46.528 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2269878 00:04:46.528 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:46.528 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2269878 00:04:46.528 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:46.528 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:46.528 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:46.528 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:46.528 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:46.528 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:46.528 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:46.528 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2269878 00:04:46.528 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:46.528 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2269878 00:04:46.528 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:46.528 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:46.528 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:46.528 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:46.528 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:46.528 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2269878 00:04:46.528 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:46.528 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:46.528 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:46.528 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2269878 00:04:46.528 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:46.528 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2269878 00:04:46.528 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:46.528 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2269878 00:04:46.528 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:46.528 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:46.528 03:52:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:46.528 03:52:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2269878 00:04:46.528 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2269878 ']' 00:04:46.528 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2269878 00:04:46.528 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:46.528 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.528 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2269878 00:04:46.528 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.528 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.528 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2269878' 00:04:46.528 killing process with pid 2269878 00:04:46.528 03:52:40 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2269878 00:04:46.528 03:52:40 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2269878 00:04:47.092 00:04:47.092 real 0m1.196s 00:04:47.092 user 0m1.175s 00:04:47.092 sys 0m0.445s 00:04:47.092 03:52:41 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.092 03:52:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:47.092 ************************************ 00:04:47.092 END TEST dpdk_mem_utility 00:04:47.092 ************************************ 00:04:47.092 03:52:41 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:47.092 03:52:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.092 03:52:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.092 03:52:41 -- common/autotest_common.sh@10 -- # set +x 00:04:47.092 ************************************ 00:04:47.092 START TEST event 00:04:47.092 ************************************ 00:04:47.092 03:52:41 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:47.092 * Looking for test storage... 00:04:47.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:47.092 03:52:41 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:47.092 03:52:41 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:47.092 03:52:41 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:47.092 03:52:41 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:47.092 03:52:41 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.092 03:52:41 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.092 03:52:41 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.092 03:52:41 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.092 03:52:41 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.092 03:52:41 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.092 03:52:41 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.093 03:52:41 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.093 03:52:41 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.093 03:52:41 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.093 03:52:41 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.093 03:52:41 event -- scripts/common.sh@344 -- # case "$op" in 00:04:47.093 03:52:41 event -- scripts/common.sh@345 -- # : 1 00:04:47.093 03:52:41 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.093 03:52:41 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.093 03:52:41 event -- scripts/common.sh@365 -- # decimal 1 00:04:47.093 03:52:41 event -- scripts/common.sh@353 -- # local d=1 00:04:47.093 03:52:41 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.093 03:52:41 event -- scripts/common.sh@355 -- # echo 1 00:04:47.093 03:52:41 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.093 03:52:41 event -- scripts/common.sh@366 -- # decimal 2 00:04:47.093 03:52:41 event -- scripts/common.sh@353 -- # local d=2 00:04:47.093 03:52:41 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.093 03:52:41 event -- scripts/common.sh@355 -- # echo 2 00:04:47.093 03:52:41 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.093 03:52:41 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.093 03:52:41 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.093 03:52:41 event -- scripts/common.sh@368 -- # return 0 00:04:47.093 03:52:41 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.093 03:52:41 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:47.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.093 --rc genhtml_branch_coverage=1 00:04:47.093 --rc genhtml_function_coverage=1 00:04:47.093 --rc genhtml_legend=1 00:04:47.093 --rc geninfo_all_blocks=1 00:04:47.093 --rc geninfo_unexecuted_blocks=1 00:04:47.093 00:04:47.093 ' 00:04:47.093 03:52:41 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:47.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.093 --rc genhtml_branch_coverage=1 00:04:47.093 --rc genhtml_function_coverage=1 00:04:47.093 --rc genhtml_legend=1 00:04:47.093 --rc geninfo_all_blocks=1 00:04:47.093 --rc geninfo_unexecuted_blocks=1 00:04:47.093 00:04:47.093 ' 00:04:47.093 03:52:41 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:47.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.093 --rc genhtml_branch_coverage=1 00:04:47.093 --rc genhtml_function_coverage=1 00:04:47.093 --rc genhtml_legend=1 00:04:47.093 --rc geninfo_all_blocks=1 00:04:47.093 --rc geninfo_unexecuted_blocks=1 00:04:47.093 00:04:47.093 ' 00:04:47.093 03:52:41 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:47.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.093 --rc genhtml_branch_coverage=1 00:04:47.093 --rc genhtml_function_coverage=1 00:04:47.093 --rc genhtml_legend=1 00:04:47.093 --rc geninfo_all_blocks=1 00:04:47.093 --rc geninfo_unexecuted_blocks=1 00:04:47.093 00:04:47.093 ' 00:04:47.093 03:52:41 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:47.093 03:52:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:47.093 03:52:41 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:47.093 03:52:41 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:47.093 03:52:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.093 03:52:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.093 ************************************ 00:04:47.093 START TEST event_perf 00:04:47.093 ************************************ 00:04:47.093 03:52:41 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:47.093 Running I/O for 1 seconds...[2024-12-10 03:52:41.411211] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:47.093 [2024-12-10 03:52:41.411276] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270134 ] 00:04:47.351 [2024-12-10 03:52:41.479087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:47.351 [2024-12-10 03:52:41.537940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.351 [2024-12-10 03:52:41.538048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.351 [2024-12-10 03:52:41.538123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.351 [2024-12-10 03:52:41.538129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.283 Running I/O for 1 seconds... 00:04:48.283 lcore 0: 234951 00:04:48.283 lcore 1: 234949 00:04:48.283 lcore 2: 234951 00:04:48.283 lcore 3: 234951 00:04:48.283 done. 00:04:48.283 00:04:48.283 real 0m1.211s 00:04:48.283 user 0m4.128s 00:04:48.283 sys 0m0.078s 00:04:48.283 03:52:42 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.283 03:52:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:48.284 ************************************ 00:04:48.284 END TEST event_perf 00:04:48.284 ************************************ 00:04:48.284 03:52:42 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:48.284 03:52:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:48.284 03:52:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.284 03:52:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.284 ************************************ 00:04:48.284 START TEST event_reactor 00:04:48.284 ************************************ 00:04:48.284 03:52:42 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:48.284 [2024-12-10 03:52:42.661627] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:48.284 [2024-12-10 03:52:42.661685] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270347 ] 00:04:48.543 [2024-12-10 03:52:42.727270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.543 [2024-12-10 03:52:42.782459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.476 test_start 00:04:49.476 oneshot 00:04:49.476 tick 100 00:04:49.476 tick 100 00:04:49.476 tick 250 00:04:49.476 tick 100 00:04:49.476 tick 100 00:04:49.476 tick 100 00:04:49.476 tick 250 00:04:49.476 tick 500 00:04:49.476 tick 100 00:04:49.476 tick 100 00:04:49.476 tick 250 00:04:49.476 tick 100 00:04:49.476 tick 100 00:04:49.476 test_end 00:04:49.476 00:04:49.476 real 0m1.193s 00:04:49.476 user 0m1.130s 00:04:49.476 sys 0m0.059s 00:04:49.476 03:52:43 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.476 03:52:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:49.476 ************************************ 00:04:49.476 END TEST event_reactor 00:04:49.476 ************************************ 00:04:49.735 03:52:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:49.735 03:52:43 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:49.735 03:52:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.735 03:52:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.735 ************************************ 00:04:49.735 START TEST event_reactor_perf 00:04:49.735 ************************************ 00:04:49.735 03:52:43 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:49.735 [2024-12-10 03:52:43.906512] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:49.735 [2024-12-10 03:52:43.906610] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270505 ] 00:04:49.735 [2024-12-10 03:52:43.972537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.735 [2024-12-10 03:52:44.028453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.127 test_start 00:04:51.127 test_end 00:04:51.127 Performance: 445112 events per second 00:04:51.127 00:04:51.127 real 0m1.200s 00:04:51.127 user 0m1.130s 00:04:51.127 sys 0m0.065s 00:04:51.127 03:52:45 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.127 03:52:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:51.127 ************************************ 00:04:51.127 END TEST event_reactor_perf 00:04:51.127 ************************************ 00:04:51.127 03:52:45 event -- event/event.sh@49 -- # uname -s 00:04:51.127 03:52:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:51.127 03:52:45 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:51.127 03:52:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.127 03:52:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.127 03:52:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.127 ************************************ 00:04:51.127 START TEST event_scheduler 00:04:51.127 ************************************ 00:04:51.127 03:52:45 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:51.127 * Looking for test storage... 
00:04:51.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:51.127 03:52:45 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.127 03:52:45 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.127 03:52:45 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.127 03:52:45 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.127 03:52:45 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:51.127 03:52:45 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.127 03:52:45 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:51.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.127 --rc genhtml_branch_coverage=1 00:04:51.127 --rc genhtml_function_coverage=1 00:04:51.127 --rc genhtml_legend=1 00:04:51.127 --rc geninfo_all_blocks=1 00:04:51.127 --rc geninfo_unexecuted_blocks=1 00:04:51.127 00:04:51.127 ' 00:04:51.127 03:52:45 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.127 --rc genhtml_branch_coverage=1 00:04:51.127 --rc genhtml_function_coverage=1 00:04:51.127 --rc genhtml_legend=1 00:04:51.127 --rc geninfo_all_blocks=1 00:04:51.127 --rc geninfo_unexecuted_blocks=1 00:04:51.127 00:04:51.127 ' 00:04:51.127 03:52:45 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:51.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.127 --rc genhtml_branch_coverage=1 00:04:51.127 --rc genhtml_function_coverage=1 00:04:51.127 --rc genhtml_legend=1 00:04:51.127 --rc geninfo_all_blocks=1 00:04:51.127 --rc geninfo_unexecuted_blocks=1 00:04:51.127 00:04:51.127 ' 00:04:51.127 03:52:45 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.127 --rc genhtml_branch_coverage=1 00:04:51.127 --rc genhtml_function_coverage=1 00:04:51.127 --rc genhtml_legend=1 00:04:51.127 --rc geninfo_all_blocks=1 00:04:51.127 --rc geninfo_unexecuted_blocks=1 00:04:51.127 00:04:51.127 ' 00:04:51.127 03:52:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:51.127 03:52:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2270698 00:04:51.127 03:52:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.127 03:52:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:51.127 03:52:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2270698 00:04:51.127 03:52:45 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2270698 ']' 00:04:51.127 03:52:45 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.127 03:52:45 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.127 03:52:45 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.127 03:52:45 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.127 03:52:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.127 [2024-12-10 03:52:45.333795] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:51.127 [2024-12-10 03:52:45.333886] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270698 ] 00:04:51.127 [2024-12-10 03:52:45.401906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:51.127 [2024-12-10 03:52:45.462154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.127 [2024-12-10 03:52:45.462257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.127 [2024-12-10 03:52:45.462349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:51.127 [2024-12-10 03:52:45.462360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:51.433 03:52:45 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.433 03:52:45 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:51.433 03:52:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:51.433 03:52:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.433 03:52:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.433 [2024-12-10 03:52:45.563357] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:51.433 [2024-12-10 03:52:45.563387] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:51.433 [2024-12-10 03:52:45.563420] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:51.433 [2024-12-10 03:52:45.563432] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:51.433 [2024-12-10 03:52:45.563442] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:51.433 03:52:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.433 03:52:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:51.433 03:52:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.433 03:52:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.433 [2024-12-10 03:52:45.664861] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:51.433 03:52:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.433 03:52:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:51.433 03:52:45 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.433 03:52:45 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.433 03:52:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.433 ************************************ 00:04:51.433 START TEST scheduler_create_thread 00:04:51.433 ************************************ 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.433 2 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.433 3 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.433 4 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.433 5 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.433 6 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.433 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.433 7 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.434 8 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.434 9 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.434 10 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.434 03:52:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.023 03:52:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.023 00:04:52.023 real 0m0.591s 00:04:52.023 user 0m0.011s 00:04:52.023 sys 0m0.004s 00:04:52.023 03:52:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.023 03:52:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.023 ************************************ 00:04:52.023 END TEST scheduler_create_thread 00:04:52.023 ************************************ 00:04:52.023 03:52:46 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:52.023 03:52:46 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2270698 00:04:52.023 03:52:46 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2270698 ']' 00:04:52.023 03:52:46 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2270698 00:04:52.023 03:52:46 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:52.023 03:52:46 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.023 03:52:46 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2270698 00:04:52.023 03:52:46 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:52.023 03:52:46 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:52.023 03:52:46 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2270698' 00:04:52.023 killing process with pid 2270698 00:04:52.023 03:52:46 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2270698 00:04:52.024 03:52:46 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2270698 00:04:52.589 [2024-12-10 03:52:46.761322] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:52.589 00:04:52.589 real 0m1.827s 00:04:52.589 user 0m2.452s 00:04:52.589 sys 0m0.352s 00:04:52.589 03:52:46 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.589 03:52:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:52.589 ************************************ 00:04:52.589 END TEST event_scheduler 00:04:52.589 ************************************ 00:04:52.847 03:52:46 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:52.847 03:52:46 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:52.847 03:52:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.847 03:52:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.847 03:52:46 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.847 ************************************ 00:04:52.847 START TEST app_repeat 00:04:52.847 ************************************ 00:04:52.847 03:52:47 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:52.847 03:52:47 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.847 03:52:47 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.847 03:52:47 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:52.847 03:52:47 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.847 03:52:47 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:52.847 03:52:47 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:52.848 03:52:47 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:52.848 03:52:47 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2270961 00:04:52.848 03:52:47 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:52.848 03:52:47 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.848 03:52:47 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2270961' 00:04:52.848 Process app_repeat pid: 2270961 00:04:52.848 03:52:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:52.848 03:52:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:52.848 spdk_app_start Round 0 00:04:52.848 03:52:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2270961 /var/tmp/spdk-nbd.sock 00:04:52.848 03:52:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2270961 ']' 00:04:52.848 03:52:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.848 03:52:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.848 03:52:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:52.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:52.848 03:52:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.848 03:52:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.848 [2024-12-10 03:52:47.049572] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:52.848 [2024-12-10 03:52:47.049646] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270961 ] 00:04:52.848 [2024-12-10 03:52:47.117099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.848 [2024-12-10 03:52:47.177070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.848 [2024-12-10 03:52:47.177075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.105 03:52:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.105 03:52:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:53.105 03:52:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.363 Malloc0 00:04:53.363 03:52:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.621 Malloc1 00:04:53.621 03:52:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.621 03:52:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.621 03:52:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.621 03:52:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:53.621 03:52:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.621 03:52:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:53.621 03:52:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.621 03:52:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.621 03:52:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.621 03:52:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:53.621 03:52:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.621 03:52:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:53.621 03:52:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:53.621 03:52:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:53.621 03:52:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.621 03:52:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:53.879 /dev/nbd0 00:04:53.879 03:52:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:53.879 03:52:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:53.879 03:52:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:53.879 03:52:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:53.879 03:52:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:53.879 03:52:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:53.879 03:52:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:53.879 03:52:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:53.879 03:52:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:53.879 03:52:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:53.879 03:52:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.879 1+0 records in 00:04:53.879 1+0 records out 00:04:53.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212637 s, 19.3 MB/s 00:04:53.879 03:52:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.879 03:52:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:53.879 03:52:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.879 03:52:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:53.879 03:52:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:53.879 03:52:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.879 03:52:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.879 03:52:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.137 /dev/nbd1 00:04:54.137 03:52:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.137 03:52:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.137 03:52:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:54.137 03:52:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:54.137 03:52:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:54.137 03:52:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:54.137 03:52:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:54.137 03:52:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:54.137 03:52:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:54.137 03:52:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:54.137 03:52:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.394 1+0 records in 00:04:54.394 1+0 records out 00:04:54.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289072 s, 14.2 MB/s 00:04:54.394 03:52:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.394 03:52:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:54.394 03:52:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.394 03:52:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:54.394 03:52:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:54.394 03:52:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.394 03:52:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.394 
03:52:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.394 03:52:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.394 03:52:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:54.653 { 00:04:54.653 "nbd_device": "/dev/nbd0", 00:04:54.653 "bdev_name": "Malloc0" 00:04:54.653 }, 00:04:54.653 { 00:04:54.653 "nbd_device": "/dev/nbd1", 00:04:54.653 "bdev_name": "Malloc1" 00:04:54.653 } 00:04:54.653 ]' 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.653 { 00:04:54.653 "nbd_device": "/dev/nbd0", 00:04:54.653 "bdev_name": "Malloc0" 00:04:54.653 }, 00:04:54.653 { 00:04:54.653 "nbd_device": "/dev/nbd1", 00:04:54.653 "bdev_name": "Malloc1" 00:04:54.653 } 00:04:54.653 ]' 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.653 /dev/nbd1' 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.653 /dev/nbd1' 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:54.653 256+0 records in 00:04:54.653 256+0 records out 00:04:54.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00510175 s, 206 MB/s 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:54.653 256+0 records in 00:04:54.653 256+0 records out 00:04:54.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196502 s, 53.4 MB/s 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:54.653 256+0 records in 00:04:54.653 256+0 records out 00:04:54.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218644 s, 48.0 MB/s 00:04:54.653 03:52:48 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.653 03:52:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:54.911 03:52:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:54.911 03:52:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:54.911 03:52:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:54.911 03:52:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.911 03:52:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.911 03:52:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:54.911 03:52:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.911 03:52:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.911 03:52:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.911 03:52:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:55.168 03:52:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:55.168 03:52:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:55.168 03:52:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:55.168 03:52:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.168 03:52:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:55.168 03:52:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:55.168 03:52:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.168 03:52:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.168 03:52:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.168 03:52:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.168 03:52:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.426 03:52:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:55.426 03:52:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:55.426 03:52:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.684 03:52:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.684 03:52:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.684 03:52:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.684 03:52:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.684 03:52:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.684 03:52:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.684 03:52:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.684 03:52:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.684 03:52:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.684 03:52:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:55.943 03:52:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:56.201 [2024-12-10 03:52:50.331819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.201 [2024-12-10 03:52:50.386889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.201 [2024-12-10 03:52:50.386891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.201 [2024-12-10 03:52:50.442618] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:56.201 [2024-12-10 03:52:50.442686] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:59.481 03:52:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:59.481 03:52:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:59.482 spdk_app_start Round 1 00:04:59.482 03:52:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2270961 /var/tmp/spdk-nbd.sock 00:04:59.482 03:52:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2270961 ']' 00:04:59.482 03:52:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.482 03:52:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.482 03:52:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:59.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
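Round 0 above exercises the full data path: both Malloc bdevs are exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is written through each device with dd, read back and compared with cmp, the devices are detached, and the app is stopped with spdk_kill_instance SIGTERM before Round 1 restarts it. Stripped of the helper plumbing in bdev/nbd_common.sh, the write/verify pass visible in the trace amounts to the following sketch (paths shortened; the readiness polling and device-count checks are omitted here):

    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256             # 1 MiB reference pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct    # write the pattern
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M nbdrandtest $nbd                               # read back and verify
    done
    rm nbdrandtest
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1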
00:04:59.482 03:52:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.482 03:52:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.482 03:52:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.482 03:52:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:59.482 03:52:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.482 Malloc0 00:04:59.482 03:52:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.740 Malloc1 00:04:59.740 03:52:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.740 03:52:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.740 03:52:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.740 03:52:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:59.740 03:52:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.740 03:52:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:59.740 03:52:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.740 03:52:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.740 03:52:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.740 03:52:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:59.740 03:52:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.740 03:52:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:59.740 03:52:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:59.740 03:52:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:59.740 03:52:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.740 03:52:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.998 /dev/nbd0 00:04:59.998 03:52:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.998 03:52:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.998 03:52:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:59.998 03:52:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:59.998 03:52:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:59.998 03:52:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:59.998 03:52:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:59.998 03:52:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:59.998 03:52:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:59.998 03:52:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:59.998 03:52:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:59.998 1+0 records in 00:04:59.998 1+0 records out 00:04:59.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000147536 s, 27.8 MB/s 00:04:59.998 03:52:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.998 03:52:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:59.998 03:52:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.998 03:52:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:59.998 03:52:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:59.998 03:52:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.998 03:52:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.998 03:52:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:00.256 /dev/nbd1 00:05:00.256 03:52:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:00.256 03:52:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:00.256 03:52:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:00.256 03:52:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:00.256 03:52:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:00.256 03:52:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:00.256 03:52:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:00.256 03:52:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:00.256 03:52:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:00.256 03:52:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:00.256 03:52:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.256 1+0 records in 00:05:00.256 1+0 records out 00:05:00.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165158 s, 24.8 MB/s 00:05:00.256 03:52:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.256 03:52:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:00.256 03:52:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.256 03:52:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:00.256 03:52:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:00.256 03:52:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.256 03:52:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.256 03:52:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.256 03:52:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.256 03:52:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:00.514 { 00:05:00.514 "nbd_device": "/dev/nbd0", 00:05:00.514 "bdev_name": "Malloc0" 00:05:00.514 }, 00:05:00.514 { 00:05:00.514 "nbd_device": "/dev/nbd1", 00:05:00.514 "bdev_name": "Malloc1" 00:05:00.514 } 00:05:00.514 ]' 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:00.514 { 00:05:00.514 "nbd_device": "/dev/nbd0", 00:05:00.514 "bdev_name": "Malloc0" 00:05:00.514 }, 00:05:00.514 { 00:05:00.514 "nbd_device": "/dev/nbd1", 00:05:00.514 "bdev_name": "Malloc1" 00:05:00.514 } 00:05:00.514 ]' 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:00.514 /dev/nbd1' 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:00.514 /dev/nbd1' 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:00.514 256+0 records in 00:05:00.514 256+0 records out 00:05:00.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00463382 s, 226 MB/s 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.514 03:52:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:00.772 256+0 records in 00:05:00.772 256+0 records out 00:05:00.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208125 s, 50.4 MB/s 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:00.772 256+0 records in 00:05:00.772 256+0 records out 00:05:00.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219063 s, 47.9 MB/s 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.772 03:52:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.029 03:52:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.029 03:52:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.029 03:52:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.029 03:52:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.029 03:52:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.029 03:52:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.029 03:52:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.029 03:52:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.029 03:52:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.029 03:52:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.287 03:52:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.287 03:52:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.287 03:52:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.287 03:52:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.287 03:52:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.287 03:52:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.287 03:52:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.287 03:52:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.287 03:52:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.287 03:52:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.287 03:52:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.545 03:52:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.545 03:52:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:01.545 03:52:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.545 03:52:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:01.545 03:52:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:01.545 03:52:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.545 03:52:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:01.545 03:52:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:01.545 03:52:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:01.545 03:52:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:01.545 03:52:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:01.545 03:52:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:01.545 03:52:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.803 03:52:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.061 [2024-12-10 03:52:56.355490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.061 [2024-12-10 03:52:56.408984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.061 [2024-12-10 03:52:56.408984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.318 [2024-12-10 03:52:56.468443] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.319 [2024-12-10 03:52:56.468519] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:04.844 03:52:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:04.844 03:52:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:04.844 spdk_app_start Round 2 00:05:04.844 03:52:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2270961 /var/tmp/spdk-nbd.sock 00:05:04.844 03:52:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2270961 ']' 00:05:04.844 03:52:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.844 03:52:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.844 03:52:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:04.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
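Each nbd_start_disk in the rounds above is followed by the same readiness probe before any data is written: the test waits for the device to show up in /proc/partitions, then issues a single direct-I/O read to confirm the kernel nbd device actually answers. A simplified paraphrase of that check, reconstructed from the commands visible in the trace (the retry bound of 20 is taken from the trace; the sleep between attempts and the temp-file path are assumptions, and the real helper lives in autotest_common.sh):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break    # device registered yet?
            sleep 0.1                                           # assumed back-off
        done
        # prove the device serves reads: a 4 KiB direct-I/O read must return data
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

The matching waitfornbd_exit after nbd_stop_disk polls /proc/partitions the other way round, breaking only once the entry has disappeared.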
00:05:04.844 03:52:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.844 03:52:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.102 03:52:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.102 03:52:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:05.102 03:52:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.360 Malloc0 00:05:05.360 03:52:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.618 Malloc1 00:05:05.618 03:52:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.618 03:52:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.618 03:52:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.618 03:52:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.618 03:52:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.618 03:52:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.618 03:52:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.618 03:52:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.618 03:52:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.618 03:52:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.618 03:52:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.618 03:52:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.618 03:52:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:05.618 03:52:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.618 03:52:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.618 03:52:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:06.184 /dev/nbd0 00:05:06.184 03:53:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.184 03:53:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:06.184 03:53:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:06.184 03:53:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:06.184 03:53:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:06.184 03:53:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:06.184 03:53:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:06.184 03:53:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:06.184 03:53:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:06.184 03:53:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:06.184 03:53:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:06.184 1+0 records in 00:05:06.184 1+0 records out 00:05:06.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000144979 s, 28.3 MB/s 00:05:06.184 03:53:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.184 03:53:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:06.184 03:53:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.184 03:53:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:06.184 03:53:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:06.184 03:53:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.184 03:53:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.184 03:53:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.442 /dev/nbd1 00:05:06.442 03:53:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.442 03:53:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.442 03:53:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:06.442 03:53:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:06.442 03:53:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:06.442 03:53:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:06.442 03:53:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:06.442 03:53:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:06.442 03:53:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:06.442 03:53:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:06.443 03:53:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.443 1+0 records in 00:05:06.443 1+0 records out 00:05:06.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207953 s, 19.7 MB/s 00:05:06.443 03:53:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.443 03:53:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:06.443 03:53:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.443 03:53:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:06.443 03:53:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:06.443 03:53:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.443 03:53:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.443 03:53:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.443 03:53:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.443 03:53:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:06.701 { 00:05:06.701 "nbd_device": "/dev/nbd0", 00:05:06.701 "bdev_name": "Malloc0" 00:05:06.701 }, 00:05:06.701 { 00:05:06.701 "nbd_device": "/dev/nbd1", 00:05:06.701 "bdev_name": "Malloc1" 00:05:06.701 } 00:05:06.701 ]' 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.701 { 00:05:06.701 "nbd_device": "/dev/nbd0", 00:05:06.701 "bdev_name": "Malloc0" 00:05:06.701 }, 00:05:06.701 { 00:05:06.701 "nbd_device": "/dev/nbd1", 00:05:06.701 "bdev_name": "Malloc1" 00:05:06.701 } 00:05:06.701 ]' 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.701 /dev/nbd1' 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.701 /dev/nbd1' 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.701 256+0 records in 00:05:06.701 256+0 records out 00:05:06.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00510593 s, 205 MB/s 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.701 256+0 records in 00:05:06.701 256+0 records out 00:05:06.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019836 s, 52.9 MB/s 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.701 03:53:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.701 256+0 records in 00:05:06.701 256+0 records out 00:05:06.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022164 s, 47.3 MB/s 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.701 03:53:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.959 03:53:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.959 03:53:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.959 03:53:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.959 03:53:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.959 03:53:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.959 03:53:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.959 03:53:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.959 03:53:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.959 03:53:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.959 03:53:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.525 03:53:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.525 03:53:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.525 03:53:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.525 03:53:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.525 03:53:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.525 03:53:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.525 03:53:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.525 03:53:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.525 03:53:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.525 03:53:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.525 03:53:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.525 03:53:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.782 03:53:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.782 03:53:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.782 03:53:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.782 03:53:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.782 03:53:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.782 03:53:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.782 03:53:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.782 03:53:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.782 03:53:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.782 03:53:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.782 03:53:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.782 03:53:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:08.041 03:53:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:08.298 [2024-12-10 03:53:02.457645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.298 [2024-12-10 03:53:02.511565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.298 [2024-12-10 03:53:02.511596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.298 [2024-12-10 03:53:02.563646] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:08.298 [2024-12-10 03:53:02.563711] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:11.582 03:53:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2270961 /var/tmp/spdk-nbd.sock 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2270961 ']' 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:11.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
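Before each teardown the trace re-queries nbd_get_disks and requires the count of /dev/nbd entries to be zero, which is what the empty '[]' responses and count=0 lines above show; only then is spdk_kill_instance SIGTERM sent and the three-second sleep allowed to elapse before the next round (or, after Round 2, before the final killprocess). The count is derived from the RPC output roughly as in this sketch of the jq/grep pipeline from nbd_common.sh (the '|| true' mirrors the bare 'true' fallback visible in the trace when grep -c matches nothing):

    disks_json=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]    # every exported device must already be detached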
00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:11.582 03:53:05 event.app_repeat -- event/event.sh@39 -- # killprocess 2270961 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2270961 ']' 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2270961 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2270961 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2270961' 00:05:11.582 killing process with pid 2270961 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2270961 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2270961 00:05:11.582 spdk_app_start is called in Round 0. 00:05:11.582 Shutdown signal received, stop current app iteration 00:05:11.582 Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 reinitialization... 00:05:11.582 spdk_app_start is called in Round 1. 00:05:11.582 Shutdown signal received, stop current app iteration 00:05:11.582 Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 reinitialization... 00:05:11.582 spdk_app_start is called in Round 2. 00:05:11.582 Shutdown signal received, stop current app iteration 00:05:11.582 Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 reinitialization... 00:05:11.582 spdk_app_start is called in Round 3. 
00:05:11.582 Shutdown signal received, stop current app iteration 00:05:11.582 03:53:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:11.582 03:53:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:11.582 00:05:11.582 real 0m18.725s 00:05:11.582 user 0m41.390s 00:05:11.582 sys 0m3.215s 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.582 03:53:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.582 ************************************ 00:05:11.582 END TEST app_repeat 00:05:11.582 ************************************ 00:05:11.582 03:53:05 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:11.582 03:53:05 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:11.582 03:53:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.582 03:53:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.582 03:53:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.582 ************************************ 00:05:11.582 START TEST cpu_locks 00:05:11.582 ************************************ 00:05:11.582 03:53:05 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:11.582 * Looking for test storage... 00:05:11.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:11.582 03:53:05 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:11.582 03:53:05 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:11.582 03:53:05 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:11.582 03:53:05 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.582 03:53:05 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:11.582 03:53:05 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.582 03:53:05 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:11.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.582 --rc genhtml_branch_coverage=1 00:05:11.582 --rc genhtml_function_coverage=1 00:05:11.582 --rc genhtml_legend=1 00:05:11.582 --rc geninfo_all_blocks=1 00:05:11.582 --rc geninfo_unexecuted_blocks=1 00:05:11.582 00:05:11.582 ' 00:05:11.582 03:53:05 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:11.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.582 --rc genhtml_branch_coverage=1 00:05:11.582 --rc genhtml_function_coverage=1 00:05:11.582 --rc genhtml_legend=1 00:05:11.582 --rc geninfo_all_blocks=1 00:05:11.582 --rc geninfo_unexecuted_blocks=1 00:05:11.582 00:05:11.582 ' 00:05:11.582 03:53:05 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:11.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.582 --rc genhtml_branch_coverage=1 00:05:11.582 --rc genhtml_function_coverage=1 00:05:11.582 --rc genhtml_legend=1 00:05:11.582 --rc geninfo_all_blocks=1 00:05:11.582 --rc geninfo_unexecuted_blocks=1 00:05:11.582 00:05:11.582 ' 00:05:11.582 03:53:05 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:11.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.582 --rc genhtml_branch_coverage=1 00:05:11.582 --rc genhtml_function_coverage=1 00:05:11.582 --rc genhtml_legend=1 00:05:11.582 --rc geninfo_all_blocks=1 00:05:11.582 --rc geninfo_unexecuted_blocks=1 00:05:11.582 00:05:11.582 ' 00:05:11.582 03:53:05 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:11.582 03:53:05 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:11.582 03:53:05 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:11.582 03:53:05 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:11.582 03:53:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.582 03:53:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.582 03:53:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.582 ************************************ 
00:05:11.582 START TEST default_locks 00:05:11.582 ************************************ 00:05:11.582 03:53:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:11.583 03:53:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2273385 00:05:11.583 03:53:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.583 03:53:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2273385 00:05:11.583 03:53:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2273385 ']' 00:05:11.583 03:53:05 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.583 03:53:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.583 03:53:05 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.583 03:53:05 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.583 03:53:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.842 [2024-12-10 03:53:06.017117] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:11.842 [2024-12-10 03:53:06.017207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273385 ] 00:05:11.842 [2024-12-10 03:53:06.087045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.842 [2024-12-10 03:53:06.142592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.100 03:53:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.100 03:53:06 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:12.100 03:53:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2273385 00:05:12.100 03:53:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2273385 00:05:12.100 03:53:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.358 lslocks: write error 00:05:12.358 03:53:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2273385 00:05:12.358 03:53:06 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2273385 ']' 00:05:12.358 03:53:06 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2273385 00:05:12.358 03:53:06 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:12.358 03:53:06 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.358 03:53:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2273385 00:05:12.616 03:53:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.616 03:53:06 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.616 03:53:06 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2273385' 00:05:12.616 killing process with pid 2273385 00:05:12.616 03:53:06 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2273385 00:05:12.616 03:53:06 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2273385 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2273385 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2273385 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2273385 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2273385 ']' 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2273385) - No such process 00:05:12.876 ERROR: process (pid: 2273385) is no longer running 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:12.876 00:05:12.876 real 0m1.234s 00:05:12.876 user 0m1.180s 00:05:12.876 sys 0m0.551s 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.876 03:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.876 ************************************ 00:05:12.876 END TEST default_locks 00:05:12.876 ************************************ 00:05:12.876 03:53:07 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:12.876 03:53:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.876 03:53:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.877 03:53:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.877 ************************************ 00:05:12.877 START TEST default_locks_via_rpc 00:05:12.877 ************************************ 00:05:12.877 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:12.877 03:53:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2273665 00:05:12.877 03:53:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.877 03:53:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2273665 00:05:12.877 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2273665 ']' 00:05:12.877 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.877 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.877 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:12.877 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.877 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.135 [2024-12-10 03:53:07.298895] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:13.135 [2024-12-10 03:53:07.298984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273665 ] 00:05:13.135 [2024-12-10 03:53:07.364766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.135 [2024-12-10 03:53:07.421348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.394 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.394 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:13.394 03:53:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:13.394 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.394 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.394 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.394 03:53:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:13.394 03:53:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:13.394 03:53:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:13.394 03:53:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:13.394 03:53:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:13.394 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.394 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.394 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.394 03:53:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2273665 00:05:13.394 03:53:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2273665 00:05:13.394 03:53:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.652 03:53:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2273665 00:05:13.652 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2273665 ']' 00:05:13.652 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2273665 00:05:13.652 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:13.652 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.652 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2273665 00:05:13.652 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.652 
03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.652 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2273665' 00:05:13.652 killing process with pid 2273665 00:05:13.652 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2273665 00:05:13.652 03:53:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2273665 00:05:14.218 00:05:14.218 real 0m1.163s 00:05:14.218 user 0m1.114s 00:05:14.218 sys 0m0.504s 00:05:14.218 03:53:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.218 03:53:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.218 ************************************ 00:05:14.218 END TEST default_locks_via_rpc 00:05:14.218 ************************************ 00:05:14.218 03:53:08 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:14.218 03:53:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.218 03:53:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.218 03:53:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.218 ************************************ 00:05:14.218 START TEST non_locking_app_on_locked_coremask 00:05:14.218 ************************************ 00:05:14.218 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:14.218 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2273831 00:05:14.218 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.218 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2273831 /var/tmp/spdk.sock 00:05:14.218 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2273831 ']' 00:05:14.218 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.218 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.218 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.218 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.218 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.218 [2024-12-10 03:53:08.516144] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:05:14.218 [2024-12-10 03:53:08.516235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273831 ] 00:05:14.218 [2024-12-10 03:53:08.583694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.476 [2024-12-10 03:53:08.641375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.735 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.735 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:14.735 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2273839 00:05:14.735 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:14.735 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2273839 /var/tmp/spdk2.sock 00:05:14.735 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2273839 ']' 00:05:14.735 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.735 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.735 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.735 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.735 03:53:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.735 [2024-12-10 03:53:08.954953] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:14.735 [2024-12-10 03:53:08.955041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273839 ] 00:05:14.735 [2024-12-10 03:53:09.052691] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:14.735 [2024-12-10 03:53:09.052732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.993 [2024-12-10 03:53:09.169566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.560 03:53:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.560 03:53:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:15.560 03:53:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2273831 00:05:15.560 03:53:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2273831 00:05:15.560 03:53:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.125 lslocks: write error 00:05:16.125 03:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2273831 00:05:16.125 03:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2273831 ']' 00:05:16.125 03:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2273831 00:05:16.384 03:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:16.384 03:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.384 03:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2273831 00:05:16.384 03:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.384 03:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.384 03:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2273831' 00:05:16.384 killing process with pid 2273831 00:05:16.384 03:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2273831 00:05:16.384 03:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2273831 00:05:17.317 03:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2273839 00:05:17.317 03:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2273839 ']' 00:05:17.317 03:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2273839 00:05:17.317 03:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:17.317 03:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.317 03:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2273839 00:05:17.317 03:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.317 03:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.317 03:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2273839' 00:05:17.317 
killing process with pid 2273839 00:05:17.317 03:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2273839 00:05:17.317 03:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2273839 00:05:17.575 00:05:17.575 real 0m3.349s 00:05:17.575 user 0m3.587s 00:05:17.575 sys 0m1.038s 00:05:17.575 03:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.575 03:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.575 ************************************ 00:05:17.575 END TEST non_locking_app_on_locked_coremask 00:05:17.575 ************************************ 00:05:17.575 03:53:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:17.575 03:53:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.575 03:53:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.575 03:53:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.575 ************************************ 00:05:17.575 START TEST locking_app_on_unlocked_coremask 00:05:17.575 ************************************ 00:05:17.575 03:53:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:17.575 03:53:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2274265 00:05:17.575 03:53:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:17.575 03:53:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2274265 /var/tmp/spdk.sock 00:05:17.575 03:53:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2274265 ']' 00:05:17.575 03:53:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.575 03:53:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.575 03:53:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.575 03:53:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.575 03:53:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.575 [2024-12-10 03:53:11.916363] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:17.575 [2024-12-10 03:53:11.916454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2274265 ] 00:05:17.833 [2024-12-10 03:53:11.983859] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:17.834 [2024-12-10 03:53:11.983906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.834 [2024-12-10 03:53:12.041688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.092 03:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.092 03:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:18.092 03:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2274273 00:05:18.092 03:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:18.092 03:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2274273 /var/tmp/spdk2.sock 00:05:18.092 03:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2274273 ']' 00:05:18.092 03:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.092 03:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.092 03:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:18.092 03:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.092 03:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.092 [2024-12-10 03:53:12.370920] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:05:18.092 [2024-12-10 03:53:12.371009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2274273 ] 00:05:18.092 [2024-12-10 03:53:12.473701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.350 [2024-12-10 03:53:12.585579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.283 03:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.283 03:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:19.283 03:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2274273 00:05:19.283 03:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2274273 00:05:19.283 03:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.541 lslocks: write error 00:05:19.541 03:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2274265 00:05:19.541 03:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2274265 ']' 00:05:19.541 03:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2274265 00:05:19.541 03:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:19.541 03:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.541 03:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2274265 00:05:19.541 03:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.541 03:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.541 03:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2274265' 00:05:19.541 killing process with pid 2274265 00:05:19.541 03:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2274265 00:05:19.541 03:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2274265 00:05:20.475 03:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2274273 00:05:20.475 03:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2274273 ']' 00:05:20.475 03:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2274273 00:05:20.475 03:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:20.475 03:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.475 03:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2274273 00:05:20.475 03:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.475 03:53:14 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.475 03:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2274273' 00:05:20.475 killing process with pid 2274273 00:05:20.475 03:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2274273 00:05:20.475 03:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2274273 00:05:20.733 00:05:20.733 real 0m3.213s 00:05:20.733 user 0m3.453s 00:05:20.733 sys 0m1.014s 00:05:20.733 03:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.733 03:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.733 ************************************ 00:05:20.733 END TEST locking_app_on_unlocked_coremask 00:05:20.733 ************************************ 00:05:20.733 03:53:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:20.733 03:53:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.733 03:53:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.733 03:53:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.991 ************************************ 00:05:20.991 START TEST locking_app_on_locked_coremask 00:05:20.991 ************************************ 00:05:20.991 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:20.991 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2274655 00:05:20.991 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2274655 /var/tmp/spdk.sock 00:05:20.991 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.991 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2274655 ']' 00:05:20.991 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.991 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.991 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.991 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.991 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.991 [2024-12-10 03:53:15.178656] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:05:20.991 [2024-12-10 03:53:15.178738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2274655 ] 00:05:20.991 [2024-12-10 03:53:15.243152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.991 [2024-12-10 03:53:15.296580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.249 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.249 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:21.249 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2274713 00:05:21.249 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:21.249 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2274713 /var/tmp/spdk2.sock 00:05:21.249 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:21.249 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2274713 /var/tmp/spdk2.sock 00:05:21.249 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:21.249 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.249 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:21.249 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.249 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2274713 /var/tmp/spdk2.sock 00:05:21.249 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2274713 ']' 00:05:21.249 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.249 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.249 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.249 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.249 03:53:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.249 [2024-12-10 03:53:15.605501] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:05:21.249 [2024-12-10 03:53:15.605611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2274713 ] 00:05:21.507 [2024-12-10 03:53:15.703566] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2274655 has claimed it. 00:05:21.507 [2024-12-10 03:53:15.703636] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:22.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2274713) - No such process 00:05:22.073 ERROR: process (pid: 2274713) is no longer running 00:05:22.073 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.073 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:22.073 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:22.073 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:22.073 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:22.073 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:22.073 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2274655 00:05:22.073 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2274655 00:05:22.073 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.331 lslocks: write error 00:05:22.331 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2274655 00:05:22.331 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2274655 ']' 00:05:22.331 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2274655 00:05:22.331 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:22.331 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.331 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2274655 00:05:22.331 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.331 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.331 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2274655' 00:05:22.331 killing process with pid 2274655 00:05:22.331 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2274655 00:05:22.331 03:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2274655 00:05:22.897 00:05:22.897 real 0m1.937s 00:05:22.897 user 0m2.188s 00:05:22.897 sys 0m0.610s 00:05:22.897 03:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:22.897 03:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.897 ************************************ 00:05:22.897 END TEST locking_app_on_locked_coremask 00:05:22.897 ************************************ 00:05:22.897 03:53:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:22.897 03:53:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.897 03:53:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.897 03:53:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.897 ************************************ 00:05:22.897 START TEST locking_overlapped_coremask 00:05:22.897 ************************************ 00:05:22.897 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:22.897 03:53:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2274882 00:05:22.897 03:53:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:22.897 03:53:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2274882 /var/tmp/spdk.sock 00:05:22.897 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2274882 ']' 00:05:22.897 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.897 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.897 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.897 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.897 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.897 [2024-12-10 03:53:17.169199] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:05:22.897 [2024-12-10 03:53:17.169292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2274882 ] 00:05:22.897 [2024-12-10 03:53:17.233466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.156 [2024-12-10 03:53:17.289215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.156 [2024-12-10 03:53:17.289335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.156 [2024-12-10 03:53:17.289326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.452 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.452 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:23.452 03:53:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2275008 00:05:23.452 03:53:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2275008 /var/tmp/spdk2.sock 00:05:23.452 03:53:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:23.452 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:23.452 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2275008 /var/tmp/spdk2.sock 00:05:23.452 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:23.452 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.452 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:23.452 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.452 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2275008 /var/tmp/spdk2.sock 00:05:23.452 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2275008 ']' 00:05:23.452 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.452 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.452 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.452 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.452 03:53:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.452 [2024-12-10 03:53:17.629187] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:05:23.452 [2024-12-10 03:53:17.629273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2275008 ] 00:05:23.452 [2024-12-10 03:53:17.736028] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2274882 has claimed it. 00:05:23.452 [2024-12-10 03:53:17.736098] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:24.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2275008) - No such process 00:05:24.076 ERROR: process (pid: 2275008) is no longer running 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2274882 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2274882 ']' 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2274882 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2274882 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2274882' 00:05:24.076 killing process with pid 2274882 00:05:24.076 03:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2274882 00:05:24.076 03:53:18 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2274882 00:05:24.642 00:05:24.642 real 0m1.688s 00:05:24.642 user 0m4.709s 00:05:24.642 sys 0m0.484s 00:05:24.642 03:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.642 03:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.642 ************************************ 00:05:24.642 END TEST locking_overlapped_coremask 00:05:24.642 ************************************ 00:05:24.642 03:53:18 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:24.643 03:53:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.643 03:53:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.643 03:53:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.643 ************************************ 00:05:24.643 START TEST locking_overlapped_coremask_via_rpc 00:05:24.643 ************************************ 00:05:24.643 03:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:24.643 03:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2275178 00:05:24.643 03:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:24.643 03:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2275178 /var/tmp/spdk.sock 00:05:24.643 03:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2275178 ']' 00:05:24.643 03:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.643 03:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.643 03:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.643 03:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.643 03:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.643 [2024-12-10 03:53:18.913328] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:24.643 [2024-12-10 03:53:18.913434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2275178 ] 00:05:24.643 [2024-12-10 03:53:18.981076] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:24.643 [2024-12-10 03:53:18.981119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:24.901 [2024-12-10 03:53:19.044797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.901 [2024-12-10 03:53:19.044854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.901 [2024-12-10 03:53:19.044858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.159 03:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.159 03:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:25.159 03:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2275191 00:05:25.159 03:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2275191 /var/tmp/spdk2.sock 00:05:25.159 03:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2275191 ']' 00:05:25.159 03:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.159 03:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.159 03:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:25.159 03:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.159 03:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.159 03:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.159 [2024-12-10 03:53:19.371796] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:25.159 [2024-12-10 03:53:19.371893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2275191 ] 00:05:25.159 [2024-12-10 03:53:19.476474] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:25.159 [2024-12-10 03:53:19.476507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:25.417 [2024-12-10 03:53:19.598298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.417 [2024-12-10 03:53:19.601643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:25.417 [2024-12-10 03:53:19.601646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.984 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.984 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:25.984 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:25.984 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.984 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.984 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.984 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:25.984 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:25.984 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:25.984 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:25.984 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.984 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:25.984 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.984 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:25.984 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.984 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.984 [2024-12-10 03:53:20.366658] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2275178 has claimed it. 
00:05:26.242 request: 00:05:26.242 { 00:05:26.242 "method": "framework_enable_cpumask_locks", 00:05:26.242 "req_id": 1 00:05:26.242 } 00:05:26.242 Got JSON-RPC error response 00:05:26.242 response: 00:05:26.242 { 00:05:26.242 "code": -32603, 00:05:26.242 "message": "Failed to claim CPU core: 2" 00:05:26.242 } 00:05:26.242 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:26.242 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:26.242 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:26.242 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:26.242 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:26.242 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2275178 /var/tmp/spdk.sock 00:05:26.242 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2275178 ']' 00:05:26.242 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.242 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.242 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.242 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.242 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.500 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.500 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:26.500 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2275191 /var/tmp/spdk2.sock 00:05:26.500 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2275191 ']' 00:05:26.500 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.500 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.500 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
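The "Failed to claim CPU core: 2" response above is the expected outcome of this test: the first spdk_tgt was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), both with --disable-cpumask-locks; once the first instance takes the per-core lock files via framework_enable_cpumask_locks, the same RPC on the second instance must fail because the two masks overlap on core 2, which pid 2275178 already holds. A quick illustrative check of that overlap, assuming a plain bash shell (not part of the test output):

    # 0x7 covers cores 0,1,2 and 0x1c covers cores 2,3,4; the shared bit is core 2
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 -> CPU core 2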
00:05:26.500 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.500 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.758 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.758 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:26.758 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:26.758 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:26.758 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:26.758 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:26.758 00:05:26.758 real 0m2.084s 00:05:26.758 user 0m1.159s 00:05:26.758 sys 0m0.185s 00:05:26.758 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.758 03:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.758 ************************************ 00:05:26.758 END TEST locking_overlapped_coremask_via_rpc 00:05:26.758 ************************************ 00:05:26.758 03:53:20 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:26.758 03:53:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2275178 ]] 00:05:26.758 03:53:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2275178 00:05:26.758 03:53:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2275178 ']' 00:05:26.758 03:53:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2275178 00:05:26.758 03:53:20 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:26.758 03:53:20 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.758 03:53:20 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2275178 00:05:26.758 03:53:20 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.758 03:53:20 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.758 03:53:20 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2275178' 00:05:26.758 killing process with pid 2275178 00:05:26.758 03:53:20 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2275178 00:05:26.758 03:53:20 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2275178 00:05:27.325 03:53:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2275191 ]] 00:05:27.325 03:53:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2275191 00:05:27.325 03:53:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2275191 ']' 00:05:27.325 03:53:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2275191 00:05:27.325 03:53:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:27.325 03:53:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:27.325 03:53:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2275191 00:05:27.325 03:53:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:27.325 03:53:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:27.325 03:53:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2275191' 00:05:27.325 killing process with pid 2275191 00:05:27.325 03:53:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2275191 00:05:27.325 03:53:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2275191 00:05:27.584 03:53:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:27.584 03:53:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:27.584 03:53:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2275178 ]] 00:05:27.584 03:53:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2275178 00:05:27.584 03:53:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2275178 ']' 00:05:27.584 03:53:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2275178 00:05:27.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2275178) - No such process 00:05:27.584 03:53:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2275178 is not found' 00:05:27.584 Process with pid 2275178 is not found 00:05:27.584 03:53:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2275191 ]] 00:05:27.584 03:53:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2275191 00:05:27.584 03:53:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2275191 ']' 00:05:27.584 03:53:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2275191 00:05:27.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2275191) - No such process 00:05:27.584 03:53:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2275191 is not found' 00:05:27.584 Process with pid 2275191 is not found 00:05:27.584 03:53:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:27.584 00:05:27.584 real 0m16.097s 00:05:27.584 user 0m29.196s 00:05:27.584 sys 0m5.326s 00:05:27.584 03:53:21 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.584 03:53:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.584 ************************************ 00:05:27.584 END TEST cpu_locks 00:05:27.584 ************************************ 00:05:27.584 00:05:27.584 real 0m40.700s 00:05:27.584 user 1m19.639s 00:05:27.584 sys 0m9.355s 00:05:27.584 03:53:21 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.584 03:53:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.584 ************************************ 00:05:27.584 END TEST event 00:05:27.584 ************************************ 00:05:27.584 03:53:21 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:27.584 03:53:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.584 03:53:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.584 03:53:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.843 ************************************ 00:05:27.843 START TEST thread 00:05:27.843 ************************************ 00:05:27.843 03:53:21 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:27.843 * Looking for test storage... 00:05:27.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:27.843 03:53:22 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:27.843 03:53:22 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:27.843 03:53:22 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:27.843 03:53:22 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:27.843 03:53:22 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.843 03:53:22 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.843 03:53:22 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.843 03:53:22 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.843 03:53:22 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.843 03:53:22 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.843 03:53:22 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.843 03:53:22 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.843 03:53:22 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.843 03:53:22 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.843 03:53:22 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.843 03:53:22 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:27.843 03:53:22 thread -- scripts/common.sh@345 -- # : 1 00:05:27.843 03:53:22 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.843 03:53:22 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.843 03:53:22 thread -- scripts/common.sh@365 -- # decimal 1 00:05:27.843 03:53:22 thread -- scripts/common.sh@353 -- # local d=1 00:05:27.843 03:53:22 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.843 03:53:22 thread -- scripts/common.sh@355 -- # echo 1 00:05:27.843 03:53:22 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.843 03:53:22 thread -- scripts/common.sh@366 -- # decimal 2 00:05:27.843 03:53:22 thread -- scripts/common.sh@353 -- # local d=2 00:05:27.843 03:53:22 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.843 03:53:22 thread -- scripts/common.sh@355 -- # echo 2 00:05:27.843 03:53:22 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.843 03:53:22 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.843 03:53:22 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.843 03:53:22 thread -- scripts/common.sh@368 -- # return 0 00:05:27.843 03:53:22 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.843 03:53:22 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:27.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.843 --rc genhtml_branch_coverage=1 00:05:27.843 --rc genhtml_function_coverage=1 00:05:27.843 --rc genhtml_legend=1 00:05:27.843 --rc geninfo_all_blocks=1 00:05:27.843 --rc geninfo_unexecuted_blocks=1 00:05:27.843 00:05:27.843 ' 00:05:27.843 03:53:22 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:27.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.843 --rc genhtml_branch_coverage=1 00:05:27.843 --rc genhtml_function_coverage=1 00:05:27.843 --rc genhtml_legend=1 00:05:27.843 --rc geninfo_all_blocks=1 00:05:27.843 --rc geninfo_unexecuted_blocks=1 00:05:27.843 
00:05:27.843 ' 00:05:27.843 03:53:22 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:27.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.843 --rc genhtml_branch_coverage=1 00:05:27.843 --rc genhtml_function_coverage=1 00:05:27.843 --rc genhtml_legend=1 00:05:27.843 --rc geninfo_all_blocks=1 00:05:27.843 --rc geninfo_unexecuted_blocks=1 00:05:27.843 00:05:27.843 ' 00:05:27.843 03:53:22 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:27.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.843 --rc genhtml_branch_coverage=1 00:05:27.843 --rc genhtml_function_coverage=1 00:05:27.843 --rc genhtml_legend=1 00:05:27.843 --rc geninfo_all_blocks=1 00:05:27.843 --rc geninfo_unexecuted_blocks=1 00:05:27.843 00:05:27.843 ' 00:05:27.843 03:53:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:27.843 03:53:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:27.843 03:53:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.843 03:53:22 thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.843 ************************************ 00:05:27.843 START TEST thread_poller_perf 00:05:27.843 ************************************ 00:05:27.843 03:53:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:27.843 [2024-12-10 03:53:22.156944] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:27.843 [2024-12-10 03:53:22.157011] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2275681 ] 00:05:27.843 [2024-12-10 03:53:22.224768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.101 [2024-12-10 03:53:22.284082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.101 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:29.035 [2024-12-10T02:53:23.424Z] ====================================== 00:05:29.035 [2024-12-10T02:53:23.424Z] busy:2711541009 (cyc) 00:05:29.035 [2024-12-10T02:53:23.424Z] total_run_count: 365000 00:05:29.035 [2024-12-10T02:53:23.424Z] tsc_hz: 2700000000 (cyc) 00:05:29.035 [2024-12-10T02:53:23.424Z] ====================================== 00:05:29.035 [2024-12-10T02:53:23.424Z] poller_cost: 7428 (cyc), 2751 (nsec) 00:05:29.035 00:05:29.035 real 0m1.210s 00:05:29.035 user 0m1.132s 00:05:29.035 sys 0m0.073s 00:05:29.035 03:53:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.035 03:53:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.035 ************************************ 00:05:29.035 END TEST thread_poller_perf 00:05:29.035 ************************************ 00:05:29.035 03:53:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:29.035 03:53:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:29.035 03:53:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.035 03:53:23 thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.035 ************************************ 00:05:29.035 START TEST thread_poller_perf 00:05:29.035 ************************************ 00:05:29.035 03:53:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:29.035 [2024-12-10 03:53:23.417321] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:29.035 [2024-12-10 03:53:23.417387] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2275839 ] 00:05:29.297 [2024-12-10 03:53:23.483453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.297 [2024-12-10 03:53:23.544504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.297 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:30.235 [2024-12-10T02:53:24.624Z] ====================================== 00:05:30.235 [2024-12-10T02:53:24.624Z] busy:2702352663 (cyc) 00:05:30.235 [2024-12-10T02:53:24.624Z] total_run_count: 4452000 00:05:30.235 [2024-12-10T02:53:24.624Z] tsc_hz: 2700000000 (cyc) 00:05:30.235 [2024-12-10T02:53:24.624Z] ====================================== 00:05:30.235 [2024-12-10T02:53:24.624Z] poller_cost: 606 (cyc), 224 (nsec) 00:05:30.235 00:05:30.235 real 0m1.206s 00:05:30.235 user 0m1.138s 00:05:30.235 sys 0m0.064s 00:05:30.235 03:53:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.235 03:53:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.235 ************************************ 00:05:30.235 END TEST thread_poller_perf 00:05:30.235 ************************************ 00:05:30.493 03:53:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:30.493 00:05:30.493 real 0m2.665s 00:05:30.493 user 0m2.409s 00:05:30.493 sys 0m0.260s 00:05:30.493 03:53:24 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.493 03:53:24 thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.493 ************************************ 00:05:30.493 END TEST thread 00:05:30.493 ************************************ 00:05:30.493 03:53:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:30.493 03:53:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:30.493 03:53:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.493 03:53:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.493 03:53:24 -- common/autotest_common.sh@10 -- # set +x 00:05:30.493 ************************************ 00:05:30.493 START TEST app_cmdline 00:05:30.493 ************************************ 00:05:30.493 03:53:24 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:30.493 * Looking for test storage... 
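The two poller_perf summaries above follow directly from the printed counters: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure converts that through the reported tsc_hz of 2700000000 (2.7 GHz). A minimal sketch that re-derives the printed numbers, assuming plain bash integer arithmetic (illustrative only, not part of the test):

    # 1 us period run: 2711541009 cyc / 365000 calls -> 7428 cyc -> 2751 nsec
    echo $(( 2711541009 / 365000 ))      # 7428
    echo $(( 7428 * 1000 / 2700 ))       # 2751
    # 0 us period run: 2702352663 cyc / 4452000 calls -> 606 cyc -> 224 nsec
    echo $(( 2702352663 / 4452000 ))     # 606
    echo $(( 606 * 1000 / 2700 ))        # 224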
00:05:30.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:30.493 03:53:24 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:30.493 03:53:24 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:30.493 03:53:24 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:30.493 03:53:24 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:30.493 03:53:24 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.493 03:53:24 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.493 03:53:24 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.493 03:53:24 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.493 03:53:24 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.493 03:53:24 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.493 03:53:24 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.493 03:53:24 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.494 03:53:24 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:30.494 03:53:24 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.494 03:53:24 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:30.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.494 --rc genhtml_branch_coverage=1 00:05:30.494 --rc genhtml_function_coverage=1 00:05:30.494 --rc genhtml_legend=1 00:05:30.494 --rc geninfo_all_blocks=1 00:05:30.494 --rc geninfo_unexecuted_blocks=1 00:05:30.494 00:05:30.494 ' 00:05:30.494 03:53:24 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:30.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.494 --rc genhtml_branch_coverage=1 00:05:30.494 --rc genhtml_function_coverage=1 00:05:30.494 --rc genhtml_legend=1 00:05:30.494 --rc geninfo_all_blocks=1 00:05:30.494 --rc geninfo_unexecuted_blocks=1 
00:05:30.494 00:05:30.494 ' 00:05:30.494 03:53:24 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:30.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.494 --rc genhtml_branch_coverage=1 00:05:30.494 --rc genhtml_function_coverage=1 00:05:30.494 --rc genhtml_legend=1 00:05:30.494 --rc geninfo_all_blocks=1 00:05:30.494 --rc geninfo_unexecuted_blocks=1 00:05:30.494 00:05:30.494 ' 00:05:30.494 03:53:24 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:30.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.494 --rc genhtml_branch_coverage=1 00:05:30.494 --rc genhtml_function_coverage=1 00:05:30.494 --rc genhtml_legend=1 00:05:30.494 --rc geninfo_all_blocks=1 00:05:30.494 --rc geninfo_unexecuted_blocks=1 00:05:30.494 00:05:30.494 ' 00:05:30.494 03:53:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:30.494 03:53:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2276050 00:05:30.494 03:53:24 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:30.494 03:53:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2276050 00:05:30.494 03:53:24 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2276050 ']' 00:05:30.494 03:53:24 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.494 03:53:24 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.494 03:53:24 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.494 03:53:24 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.494 03:53:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:30.752 [2024-12-10 03:53:24.880673] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:05:30.752 [2024-12-10 03:53:24.880768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2276050 ] 00:05:30.752 [2024-12-10 03:53:24.946564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.752 [2024-12-10 03:53:25.004587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.010 03:53:25 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.010 03:53:25 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:31.010 03:53:25 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:31.267 { 00:05:31.267 "version": "SPDK v25.01-pre git sha1 86d35c37a", 00:05:31.267 "fields": { 00:05:31.267 "major": 25, 00:05:31.267 "minor": 1, 00:05:31.267 "patch": 0, 00:05:31.267 "suffix": "-pre", 00:05:31.267 "commit": "86d35c37a" 00:05:31.267 } 00:05:31.267 } 00:05:31.267 03:53:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:31.267 03:53:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:31.267 03:53:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:31.267 03:53:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:31.267 03:53:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:31.267 03:53:25 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.267 03:53:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:31.267 03:53:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:31.267 03:53:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:31.267 03:53:25 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.267 03:53:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:31.267 03:53:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:31.267 03:53:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:31.267 03:53:25 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:31.267 03:53:25 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:31.267 03:53:25 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:31.267 03:53:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.268 03:53:25 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:31.268 03:53:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.268 03:53:25 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:31.268 03:53:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.268 03:53:25 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:31.268 03:53:25 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:31.268 03:53:25 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:31.525 request: 00:05:31.525 { 00:05:31.525 "method": "env_dpdk_get_mem_stats", 00:05:31.525 "req_id": 1 00:05:31.525 } 00:05:31.525 Got JSON-RPC error response 00:05:31.525 response: 00:05:31.525 { 00:05:31.525 "code": -32601, 00:05:31.525 "message": "Method not found" 00:05:31.525 } 00:05:31.525 03:53:25 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:31.525 03:53:25 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:31.525 03:53:25 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:31.525 03:53:25 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:31.525 03:53:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2276050 00:05:31.525 03:53:25 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2276050 ']' 00:05:31.525 03:53:25 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2276050 00:05:31.525 03:53:25 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:31.525 03:53:25 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.525 03:53:25 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2276050 00:05:31.525 03:53:25 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.525 03:53:25 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.525 03:53:25 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2276050' 00:05:31.525 killing process with pid 2276050 00:05:31.525 03:53:25 app_cmdline -- common/autotest_common.sh@973 -- # kill 2276050 00:05:31.525 03:53:25 app_cmdline -- common/autotest_common.sh@978 -- # wait 2276050 00:05:32.091 00:05:32.091 real 0m1.645s 00:05:32.091 user 0m2.057s 00:05:32.091 sys 0m0.470s 00:05:32.091 03:53:26 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.091 03:53:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:32.091 ************************************ 00:05:32.091 END TEST app_cmdline 00:05:32.091 ************************************ 00:05:32.091 03:53:26 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:32.091 03:53:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.091 03:53:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.091 03:53:26 -- common/autotest_common.sh@10 -- # set +x 00:05:32.091 ************************************ 00:05:32.091 START TEST version 00:05:32.091 ************************************ 00:05:32.091 03:53:26 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:32.091 * Looking for test storage... 
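The "Method not found" (-32601) response above is what the cmdline test is checking for: this spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so rpc_get_methods reports only those two methods and any other call, env_dpdk_get_mem_stats here, is rejected. The same two calls could be repeated by hand with the rpc.py client already used in the trace, assuming the target were still listening on the default /var/tmp/spdk.sock (sketch only):

    # allowed by the allowlist:
    ./scripts/rpc.py spdk_get_version
    # outside the allowlist, rejected with JSON-RPC error -32601:
    ./scripts/rpc.py env_dpdk_get_mem_stats   # -> "Method not found"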
00:05:32.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:32.091 03:53:26 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:32.091 03:53:26 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:32.091 03:53:26 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:32.350 03:53:26 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:32.350 03:53:26 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.350 03:53:26 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.350 03:53:26 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.350 03:53:26 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.350 03:53:26 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.350 03:53:26 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.350 03:53:26 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.350 03:53:26 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.350 03:53:26 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.350 03:53:26 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.350 03:53:26 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.350 03:53:26 version -- scripts/common.sh@344 -- # case "$op" in 00:05:32.350 03:53:26 version -- scripts/common.sh@345 -- # : 1 00:05:32.350 03:53:26 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.350 03:53:26 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.350 03:53:26 version -- scripts/common.sh@365 -- # decimal 1 00:05:32.350 03:53:26 version -- scripts/common.sh@353 -- # local d=1 00:05:32.350 03:53:26 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.350 03:53:26 version -- scripts/common.sh@355 -- # echo 1 00:05:32.350 03:53:26 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.350 03:53:26 version -- scripts/common.sh@366 -- # decimal 2 00:05:32.350 03:53:26 version -- scripts/common.sh@353 -- # local d=2 00:05:32.350 03:53:26 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.350 03:53:26 version -- scripts/common.sh@355 -- # echo 2 00:05:32.350 03:53:26 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.350 03:53:26 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.350 03:53:26 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.350 03:53:26 version -- scripts/common.sh@368 -- # return 0 00:05:32.350 03:53:26 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.350 03:53:26 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:32.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.350 --rc genhtml_branch_coverage=1 00:05:32.350 --rc genhtml_function_coverage=1 00:05:32.350 --rc genhtml_legend=1 00:05:32.350 --rc geninfo_all_blocks=1 00:05:32.350 --rc geninfo_unexecuted_blocks=1 00:05:32.350 00:05:32.350 ' 00:05:32.350 03:53:26 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:32.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.350 --rc genhtml_branch_coverage=1 00:05:32.350 --rc genhtml_function_coverage=1 00:05:32.350 --rc genhtml_legend=1 00:05:32.350 --rc geninfo_all_blocks=1 00:05:32.350 --rc geninfo_unexecuted_blocks=1 00:05:32.350 00:05:32.350 ' 00:05:32.350 03:53:26 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:32.350 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.350 --rc genhtml_branch_coverage=1 00:05:32.350 --rc genhtml_function_coverage=1 00:05:32.350 --rc genhtml_legend=1 00:05:32.350 --rc geninfo_all_blocks=1 00:05:32.350 --rc geninfo_unexecuted_blocks=1 00:05:32.350 00:05:32.350 ' 00:05:32.350 03:53:26 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:32.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.350 --rc genhtml_branch_coverage=1 00:05:32.350 --rc genhtml_function_coverage=1 00:05:32.350 --rc genhtml_legend=1 00:05:32.350 --rc geninfo_all_blocks=1 00:05:32.350 --rc geninfo_unexecuted_blocks=1 00:05:32.350 00:05:32.350 ' 00:05:32.350 03:53:26 version -- app/version.sh@17 -- # get_header_version major 00:05:32.350 03:53:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:32.350 03:53:26 version -- app/version.sh@14 -- # cut -f2 00:05:32.350 03:53:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:32.350 03:53:26 version -- app/version.sh@17 -- # major=25 00:05:32.350 03:53:26 version -- app/version.sh@18 -- # get_header_version minor 00:05:32.350 03:53:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:32.350 03:53:26 version -- app/version.sh@14 -- # cut -f2 00:05:32.350 03:53:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:32.350 03:53:26 version -- app/version.sh@18 -- # minor=1 00:05:32.350 03:53:26 version -- app/version.sh@19 -- # get_header_version patch 00:05:32.350 03:53:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:32.350 03:53:26 version -- app/version.sh@14 -- # cut -f2 00:05:32.350 03:53:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:32.350 03:53:26 version -- app/version.sh@19 -- # patch=0 00:05:32.350 03:53:26 version -- app/version.sh@20 -- # get_header_version suffix 00:05:32.350 03:53:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:32.350 03:53:26 version -- app/version.sh@14 -- # cut -f2 00:05:32.350 03:53:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:32.350 03:53:26 version -- app/version.sh@20 -- # suffix=-pre 00:05:32.350 03:53:26 version -- app/version.sh@22 -- # version=25.1 00:05:32.350 03:53:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:32.350 03:53:26 version -- app/version.sh@28 -- # version=25.1rc0 00:05:32.350 03:53:26 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:32.350 03:53:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:32.350 03:53:26 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:32.350 03:53:26 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:32.350 00:05:32.350 real 0m0.197s 00:05:32.350 user 0m0.124s 00:05:32.350 sys 0m0.100s 00:05:32.350 03:53:26 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.350 
03:53:26 version -- common/autotest_common.sh@10 -- # set +x 00:05:32.350 ************************************ 00:05:32.350 END TEST version 00:05:32.350 ************************************ 00:05:32.350 03:53:26 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:32.350 03:53:26 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:32.350 03:53:26 -- spdk/autotest.sh@194 -- # uname -s 00:05:32.350 03:53:26 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:32.350 03:53:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:32.350 03:53:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:32.350 03:53:26 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:32.350 03:53:26 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:32.350 03:53:26 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:32.350 03:53:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.350 03:53:26 -- common/autotest_common.sh@10 -- # set +x 00:05:32.350 03:53:26 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:32.350 03:53:26 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:32.350 03:53:26 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:32.350 03:53:26 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:32.350 03:53:26 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:32.350 03:53:26 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:32.350 03:53:26 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:32.350 03:53:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:32.350 03:53:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.350 03:53:26 -- common/autotest_common.sh@10 -- # set +x 00:05:32.350 ************************************ 00:05:32.350 START TEST nvmf_tcp 00:05:32.350 ************************************ 00:05:32.350 03:53:26 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:32.350 * Looking for test storage... 
00:05:32.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:32.350 03:53:26 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:32.350 03:53:26 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:32.350 03:53:26 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:32.609 03:53:26 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.609 03:53:26 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:32.609 03:53:26 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.609 03:53:26 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:32.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.609 --rc genhtml_branch_coverage=1 00:05:32.609 --rc genhtml_function_coverage=1 00:05:32.609 --rc genhtml_legend=1 00:05:32.609 --rc geninfo_all_blocks=1 00:05:32.609 --rc geninfo_unexecuted_blocks=1 00:05:32.609 00:05:32.609 ' 00:05:32.609 03:53:26 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:32.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.609 --rc genhtml_branch_coverage=1 00:05:32.609 --rc genhtml_function_coverage=1 00:05:32.609 --rc genhtml_legend=1 00:05:32.609 --rc geninfo_all_blocks=1 00:05:32.609 --rc geninfo_unexecuted_blocks=1 00:05:32.609 00:05:32.609 ' 00:05:32.609 03:53:26 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:32.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.609 --rc genhtml_branch_coverage=1 00:05:32.609 --rc genhtml_function_coverage=1 00:05:32.609 --rc genhtml_legend=1 00:05:32.609 --rc geninfo_all_blocks=1 00:05:32.609 --rc geninfo_unexecuted_blocks=1 00:05:32.609 00:05:32.609 ' 00:05:32.609 03:53:26 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:32.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.609 --rc genhtml_branch_coverage=1 00:05:32.609 --rc genhtml_function_coverage=1 00:05:32.609 --rc genhtml_legend=1 00:05:32.610 --rc geninfo_all_blocks=1 00:05:32.610 --rc geninfo_unexecuted_blocks=1 00:05:32.610 00:05:32.610 ' 00:05:32.610 03:53:26 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:32.610 03:53:26 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:32.610 03:53:26 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:32.610 03:53:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:32.610 03:53:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.610 03:53:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:32.610 ************************************ 00:05:32.610 START TEST nvmf_target_core 00:05:32.610 ************************************ 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:32.610 * Looking for test storage... 00:05:32.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:32.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.610 --rc genhtml_branch_coverage=1 00:05:32.610 --rc genhtml_function_coverage=1 00:05:32.610 --rc genhtml_legend=1 00:05:32.610 --rc geninfo_all_blocks=1 00:05:32.610 --rc geninfo_unexecuted_blocks=1 00:05:32.610 00:05:32.610 ' 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:32.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.610 --rc genhtml_branch_coverage=1 00:05:32.610 --rc genhtml_function_coverage=1 00:05:32.610 --rc genhtml_legend=1 00:05:32.610 --rc geninfo_all_blocks=1 00:05:32.610 --rc geninfo_unexecuted_blocks=1 00:05:32.610 00:05:32.610 ' 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:32.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.610 --rc genhtml_branch_coverage=1 00:05:32.610 --rc genhtml_function_coverage=1 00:05:32.610 --rc genhtml_legend=1 00:05:32.610 --rc geninfo_all_blocks=1 00:05:32.610 --rc geninfo_unexecuted_blocks=1 00:05:32.610 00:05:32.610 ' 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:32.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.610 --rc genhtml_branch_coverage=1 00:05:32.610 --rc genhtml_function_coverage=1 00:05:32.610 --rc genhtml_legend=1 00:05:32.610 --rc geninfo_all_blocks=1 00:05:32.610 --rc geninfo_unexecuted_blocks=1 00:05:32.610 00:05:32.610 ' 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:32.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:32.610 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:32.611 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:32.611 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:32.611 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:32.611 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:32.611 03:53:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:32.611 03:53:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:32.611 03:53:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.611 03:53:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:32.870 
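The START TEST / END TEST banners and the real/user/sys timings that bracket each sub-test in this log come from the run_test wrapper in autotest_common.sh. A minimal sketch of that banner-and-timing pattern, written here only to make the log easier to follow and not taken from the SPDK sources, would be:

    # illustrative run_test-style wrapper: banner, time the test script, banner again
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # usage mirroring the invocation traced below:
    # run_test_sketch nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp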
************************************ 00:05:32.870 START TEST nvmf_abort 00:05:32.870 ************************************ 00:05:32.870 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:32.870 * Looking for test storage... 00:05:32.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:32.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.870 --rc genhtml_branch_coverage=1 00:05:32.870 --rc genhtml_function_coverage=1 00:05:32.870 --rc genhtml_legend=1 00:05:32.870 --rc geninfo_all_blocks=1 00:05:32.870 --rc geninfo_unexecuted_blocks=1 00:05:32.870 00:05:32.870 ' 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:32.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.870 --rc genhtml_branch_coverage=1 00:05:32.870 --rc genhtml_function_coverage=1 00:05:32.870 --rc genhtml_legend=1 00:05:32.870 --rc geninfo_all_blocks=1 00:05:32.870 --rc geninfo_unexecuted_blocks=1 00:05:32.870 00:05:32.870 ' 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:32.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.870 --rc genhtml_branch_coverage=1 00:05:32.870 --rc genhtml_function_coverage=1 00:05:32.870 --rc genhtml_legend=1 00:05:32.870 --rc geninfo_all_blocks=1 00:05:32.870 --rc geninfo_unexecuted_blocks=1 00:05:32.870 00:05:32.870 ' 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:32.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.870 --rc genhtml_branch_coverage=1 00:05:32.870 --rc genhtml_function_coverage=1 00:05:32.870 --rc genhtml_legend=1 00:05:32.870 --rc geninfo_all_blocks=1 00:05:32.870 --rc geninfo_unexecuted_blocks=1 00:05:32.870 00:05:32.870 ' 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:32.870 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:32.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
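The nvmftestinit trace that follows shows how the harness wires up the physical e810 pair for a loopback NVMe/TCP run: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). Condensed from the ip and iptables calls in that trace (interface, namespace, and address names exactly as they appear there; an illustrative summary, not the common.sh implementation):

    # target port in its own namespace, initiator port left in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow the NVMe/TCP port through the firewall on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # connectivity checks, matching the two pings in the trace
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1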
00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:32.871 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.404 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:35.404 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:35.404 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:35.404 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:35.404 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:35.404 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:35.404 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:35.404 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:35.404 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:35.405 03:53:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:35.405 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:35.405 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:35.405 03:53:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:35.405 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:35.405 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:35.405 03:53:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:35.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:35.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:05:35.405 00:05:35.405 --- 10.0.0.2 ping statistics --- 00:05:35.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:35.405 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:35.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:35.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:05:35.405 00:05:35.405 --- 10.0.0.1 ping statistics --- 00:05:35.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:35.405 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.405 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2278133 00:05:35.406 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:35.406 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2278133 00:05:35.406 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2278133 ']' 00:05:35.406 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.406 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.406 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.406 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.406 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.406 [2024-12-10 03:53:29.582135] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:05:35.406 [2024-12-10 03:53:29.582224] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:35.406 [2024-12-10 03:53:29.653322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.406 [2024-12-10 03:53:29.710577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:35.406 [2024-12-10 03:53:29.710634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:35.406 [2024-12-10 03:53:29.710662] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:35.406 [2024-12-10 03:53:29.710674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:35.406 [2024-12-10 03:53:29.710683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:35.406 [2024-12-10 03:53:29.712179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.406 [2024-12-10 03:53:29.712296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.406 [2024-12-10 03:53:29.712303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.664 [2024-12-10 03:53:29.857121] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.664 Malloc0 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.664 Delay0 
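Every rpc_cmd call traced above is a standard SPDK JSON-RPC, so the same configuration can be reproduced by hand with scripts/rpc.py against the nvmf_tgt that was just started (default /var/tmp/spdk.sock socket assumed). The arguments below are copied from the trace; the comments are a reading of them, not additional options:

    # transport and backing bdevs, as set up by abort.sh so far
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0        # 64 MB RAM-backed bdev, 4096-byte blocks
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000               # wrap Malloc0 with artificial latency

The subsystem, namespace, and listener RPCs that follow in the trace (nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) continue the same pattern before the abort example is pointed at 10.0.0.2:4420.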
00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.664 [2024-12-10 03:53:29.929920] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:35.664 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.665 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:35.665 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.665 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.665 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.665 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:35.665 [2024-12-10 03:53:30.004388] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:38.193 Initializing NVMe Controllers 00:05:38.193 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:38.193 controller IO queue size 128 less than required 00:05:38.193 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:38.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:38.193 Initialization complete. Launching workers. 
00:05:38.193 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28224 00:05:38.193 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28285, failed to submit 62 00:05:38.193 success 28228, unsuccessful 57, failed 0 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:38.193 rmmod nvme_tcp 00:05:38.193 rmmod nvme_fabrics 00:05:38.193 rmmod nvme_keyring 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2278133 ']' 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2278133 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2278133 ']' 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2278133 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2278133 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2278133' 00:05:38.193 killing process with pid 2278133 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2278133 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2278133 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:38.193 03:53:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:38.193 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:40.737 00:05:40.737 real 0m7.499s 00:05:40.737 user 0m10.476s 00:05:40.737 sys 0m2.735s 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:40.737 ************************************ 00:05:40.737 END TEST nvmf_abort 00:05:40.737 ************************************ 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:40.737 ************************************ 00:05:40.737 START TEST nvmf_ns_hotplug_stress 00:05:40.737 ************************************ 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:40.737 * Looking for test storage... 
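The nvmftestfini trace above unwinds everything the init step created: the kernel initiator modules are unloaded, the nvmf_tgt process is killed, only the SPDK-tagged iptables rules are removed, and the namespace plus leftover addresses are cleaned up. As plain commands (same tools as in the trace; the namespace deletion is the assumed effect of _remove_spdk_ns, which the log runs with its output discarded):

    # unload the NVMe-oF initiator stack (the trace shows nvme_tcp, nvme_fabrics and nvme_keyring going away)
    modprobe -r nvme-tcp
    modprobe -r nvme-fabrics
    modprobe -r nvme-keyring
    # stop the target; its pid was printed earlier as nvmfpid (2278133 in this run)
    kill "$nvmfpid"
    # drop only the firewall rules the harness added (they carry an SPDK_NVMF comment)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # remove the target namespace and flush the initiator-side address
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1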
00:05:40.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.737 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.738 --rc genhtml_branch_coverage=1 00:05:40.738 --rc genhtml_function_coverage=1 00:05:40.738 --rc genhtml_legend=1 00:05:40.738 --rc geninfo_all_blocks=1 00:05:40.738 --rc geninfo_unexecuted_blocks=1 00:05:40.738 00:05:40.738 ' 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.738 --rc genhtml_branch_coverage=1 00:05:40.738 --rc genhtml_function_coverage=1 00:05:40.738 --rc genhtml_legend=1 00:05:40.738 --rc geninfo_all_blocks=1 00:05:40.738 --rc geninfo_unexecuted_blocks=1 00:05:40.738 00:05:40.738 ' 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:40.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.738 --rc genhtml_branch_coverage=1 00:05:40.738 --rc genhtml_function_coverage=1 00:05:40.738 --rc genhtml_legend=1 00:05:40.738 --rc geninfo_all_blocks=1 00:05:40.738 --rc geninfo_unexecuted_blocks=1 00:05:40.738 00:05:40.738 ' 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.738 --rc genhtml_branch_coverage=1 00:05:40.738 --rc genhtml_function_coverage=1 00:05:40.738 --rc genhtml_legend=1 00:05:40.738 --rc geninfo_all_blocks=1 00:05:40.738 --rc geninfo_unexecuted_blocks=1 00:05:40.738 00:05:40.738 ' 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:40.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:40.738 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:42.648 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:42.648 
03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:42.648 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:42.648 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:42.648 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:42.649 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:42.649 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:42.649 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:42.649 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:42.649 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:42.649 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:42.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:42.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:05:42.908 00:05:42.908 --- 10.0.0.2 ping statistics --- 00:05:42.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:42.908 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:42.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:42.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:05:42.908 00:05:42.908 --- 10.0.0.1 ping statistics --- 00:05:42.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:42.908 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2280510 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2280510 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
2280510 ']' 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.908 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:42.908 [2024-12-10 03:53:37.139270] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:42.909 [2024-12-10 03:53:37.139342] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:42.909 [2024-12-10 03:53:37.209933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.909 [2024-12-10 03:53:37.263631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:42.909 [2024-12-10 03:53:37.263685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:42.909 [2024-12-10 03:53:37.263699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:42.909 [2024-12-10 03:53:37.263711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:42.909 [2024-12-10 03:53:37.263722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
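At this point in the trace the script has moved cvl_0_0 into the cvl_0_0_ns_spdk namespace, assigned 10.0.0.1/10.0.0.2, opened TCP port 4420 in iptables, verified both directions with ping, loaded nvme-tcp, and launched nvmf_tgt inside the namespace while waiting on its RPC socket. A minimal sketch of that launch-and-wait step, using the core mask, namespace name and paths taken from the log, looks roughly like this; the readiness poll is an assumed stand-in for the suite's waitforlisten helper, not its actual implementation:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

    # Launch the target inside the network namespace (flags as seen above:
    # shm id 0, tracepoint group mask 0xFFFF, reactor core mask 0xE).
    ip netns exec "$NVMF_TARGET_NAMESPACE" \
        "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Poll the UNIX-domain RPC socket until the app answers (or exits early).
    until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done

Once the socket answers, the remaining rpc.py calls recorded in the trace (transport, subsystem, listener, bdevs) are all issued against the same /var/tmp/spdk.sock.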
00:05:42.909 [2024-12-10 03:53:37.265293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.909 [2024-12-10 03:53:37.265361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.909 [2024-12-10 03:53:37.265357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.167 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.167 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:43.167 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:43.167 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:43.167 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:43.167 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:43.167 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:43.167 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:43.426 [2024-12-10 03:53:37.650808] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:43.426 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:43.683 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:43.941 [2024-12-10 03:53:38.201705] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:43.941 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:44.199 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:44.457 Malloc0 00:05:44.457 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:44.715 Delay0 00:05:44.715 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.973 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:45.230 NULL1 00:05:45.230 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:45.488 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2280812 00:05:45.488 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:45.488 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:05:45.488 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.860 Read completed with error (sct=0, sc=11) 00:05:46.860 03:53:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.118 03:53:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:47.118 03:53:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:47.376 true 00:05:47.376 03:53:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:05:47.376 03:53:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.308 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.308 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:48.308 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:48.566 true 00:05:48.566 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:05:48.566 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.823 03:53:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.389 03:53:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:49.389 03:53:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:49.389 true 00:05:49.389 03:53:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:05:49.389 03:53:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.647 03:53:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.905 03:53:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:49.905 03:53:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:50.162 true 00:05:50.420 03:53:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:05:50.420 03:53:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.350 03:53:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.608 03:53:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:51.608 03:53:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:51.865 true 00:05:51.865 03:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:05:51.865 03:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.123 03:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.380 03:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:52.380 03:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:52.638 true 00:05:52.638 03:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:05:52.638 03:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.572 03:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.572 03:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:53.572 03:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:53.829 true 00:05:53.829 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:05:53.829 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.087 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.345 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:54.345 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:54.602 true 00:05:54.860 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:05:54.860 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.117 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.375 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:55.375 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:55.674 true 00:05:55.674 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:05:55.674 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.673 03:53:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.930 03:53:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:56.930 03:53:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:57.188 true 00:05:57.188 03:53:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:05:57.188 03:53:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.445 03:53:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.702 03:53:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:57.702 03:53:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:57.959 true 00:05:57.959 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:05:57.959 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.217 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.474 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:58.474 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:58.732 true 00:05:58.732 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:05:58.732 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.664 03:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.922 03:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:59.922 03:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:00.179 true 00:06:00.179 03:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:06:00.179 03:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.436 03:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.694 03:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1014 00:06:00.694 03:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:00.950 true 00:06:00.950 03:53:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:06:00.950 03:53:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.883 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.140 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:02.140 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:02.398 true 00:06:02.398 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:06:02.398 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.655 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.912 03:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:02.912 03:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:03.169 true 00:06:03.169 03:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:06:03.169 03:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.102 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.359 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:04.359 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:04.619 true 00:06:04.619 
03:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:06:04.619 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.878 03:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.136 03:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:05.136 03:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:05.393 true 00:06:05.393 03:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:06:05.393 03:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.326 03:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.584 03:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:06.584 03:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:06.885 true 00:06:06.885 03:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:06:06.885 03:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.142 03:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.398 03:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:07.398 03:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:07.655 true 00:06:07.655 03:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:06:07.655 03:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.913 03:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.171 03:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:08.171 03:54:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:08.428 true 00:06:08.428 03:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:06:08.429 03:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.361 03:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.618 03:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:09.618 03:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:09.875 true 00:06:09.875 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:06:09.875 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.133 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.391 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:10.391 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:10.648 true 00:06:10.648 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:06:10.648 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.908 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.169 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:11.169 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:11.426 true 00:06:11.426 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:06:11.426 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.358 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.358 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.873 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:12.873 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:13.131 true 00:06:13.131 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:06:13.131 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.388 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.646 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:13.646 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:13.903 true 00:06:13.903 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:06:13.903 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.160 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.418 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:14.418 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:14.675 true 00:06:14.675 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:06:14.675 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.608 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.866 Initializing NVMe Controllers 00:06:15.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:15.866 Controller IO queue size 128, less than required. 00:06:15.866 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:15.866 Controller IO queue size 128, less than required. 00:06:15.866 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:15.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:15.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:15.866 Initialization complete. Launching workers. 00:06:15.866 ======================================================== 00:06:15.866 Latency(us) 00:06:15.866 Device Information : IOPS MiB/s Average min max 00:06:15.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 690.37 0.34 83363.41 3375.36 1012108.88 00:06:15.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9365.36 4.57 13667.91 3311.76 541229.51 00:06:15.866 ======================================================== 00:06:15.866 Total : 10055.73 4.91 18452.79 3311.76 1012108.88 00:06:15.866 00:06:15.866 03:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:15.866 03:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:16.123 true 00:06:16.123 03:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2280812 00:06:16.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2280812) - No such process 00:06:16.123 03:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2280812 00:06:16.123 03:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.381 03:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:16.639 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:16.639 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:16.639 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:16.639 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:16.639 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:16.896 null0 00:06:17.155 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:17.155 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.155 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:17.412 null1 00:06:17.412 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:17.412 03:54:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.412 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:17.669 null2 00:06:17.669 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:17.669 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.669 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:17.926 null3 00:06:17.926 03:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:17.926 03:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.926 03:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:18.183 null4 00:06:18.183 03:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:18.183 03:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:18.183 03:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:18.441 null5 00:06:18.441 03:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:18.441 03:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:18.441 03:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:18.698 null6 00:06:18.698 03:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:18.698 03:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:18.698 03:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:18.957 null7 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
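In the initiator summary a few entries above, the Total row is consistent with the two per-namespace rows: IOPS and MiB/s are the sums, min/max are the extremes across the rows, and the average latency is the IOPS-weighted mean of the two averages. A quick sanity check using only the printed values (not part of the test itself):

    # Total average latency = IOPS-weighted mean of the NSID 1 and NSID 2 averages.
    python3 -c 'print((690.37*83363.41 + 9365.36*13667.91) / (690.37 + 9365.36))'
    # -> ~18452.8 us, matching the reported 18452.79 up to rounding of the inputs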
00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
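From @58 onward the test is in its parallel phase: eight null bdevs (null0..null7, 100 MB each with a 4096-byte block size, as passed to bdev_null_create) are created, then one add_remove worker per bdev is started in the background and its PID collected, and the traced wait line below joins all eight. Each worker (@14-@18) loops ten times, adding its bdev as a fixed namespace ID of nqn.2016-06.io.spdk:cnode1 and immediately removing it again, so namespaces churn concurrently from eight shells. A condensed sketch of that structure, reconstructed from the traced script line numbers; it is illustrative, not the verbatim ns_hotplug_stress.sh:

    # Illustrative reconstruction of the parallel add/remove phase (@14-@18, @58-@66).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2016-06.io.spdk:cnode1
    nthreads=8

    add_remove() {                        # @14-@18: one worker per namespace ID
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subnqn" "$bdev"
            "$rpc" nvmf_subsystem_remove_ns "$subnqn" "$nsid"
        done
    }

    for ((i = 0; i < nthreads; i++)); do  # @59-@60: create null0..null7
        "$rpc" bdev_null_create "null$i" 100 4096
    done

    pids=()
    for ((i = 0; i < nthreads; i++)); do  # @62-@64: launch the workers in the background
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                     # @66: e.g. "wait 2285493 2285494 ... 2285506" below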
00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.957 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.958 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:18.958 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.958 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.958 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:18.958 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:18.958 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:18.958 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.958 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:18.958 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.958 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.958 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.958 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2285493 2285494 2285496 2285498 2285500 2285502 2285504 2285506 00:06:18.958 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:19.216 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:19.216 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:19.216 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:19.216 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.216 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.216 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.216 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:19.216 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.474 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.474 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.474 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:19.474 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.474 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.474 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:19.474 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.475 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.475 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:19.475 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.475 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.475 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:19.475 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.475 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.475 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:19.475 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.475 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.475 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:19.475 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:19.475 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.475 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:19.475 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.475 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.475 03:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:19.733 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:19.733 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:19.733 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:19.733 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.733 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:19.733 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.733 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.733 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.991 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:20.557 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:20.557 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:20.557 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:20.557 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:20.557 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:20.557 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:20.557 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:20.557 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.816 03:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:21.074 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:21.074 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:21.074 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.074 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:21.074 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:21.074 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.074 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:21.074 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
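The remainder of this stretch of the log is the eight workers' nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls interleaving, so any given NSID may legitimately be present or absent at a particular instant; that churn is the point of the stress test. For anyone reproducing this locally, SPDK's rpc.py also exposes nvmf_get_subsystems, which can be used to spot-check which namespaces are currently attached while the workers run (an illustrative check, not something the traced script does, and the output layout may vary by SPDK version):

    # Pretty-print the subsystem list, including the namespaces attached to cnode1 right now.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems | python3 -m json.tool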
00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.332 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.590 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:21.590 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.590 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:21.590 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:21.590 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:21.590 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:21.590 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.590 03:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.848 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:22.106 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:22.106 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.106 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:22.106 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.106 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.106 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.106 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.364 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.621 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.621 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.621 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:22.621 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.621 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.621 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:22.621 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.621 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.621 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:22.622 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.622 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.622 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:22.622 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.622 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.622 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:22.622 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.622 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.622 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:22.622 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:22.622 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.622 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:22.622 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.622 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.622 03:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:22.880 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:22.880 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.880 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.880 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.880 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:22.880 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.880 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.880 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.138 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:23.396 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:23.397 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:23.397 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:23.397 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:23.397 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.397 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:23.397 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:23.397 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.655 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:23.912 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:23.912 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:23.912 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:23.912 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:23.912 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.912 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:23.912 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:23.912 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:24.179 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:24.179 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.179 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:24.179 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.179 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.179 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:24.492 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:24.751 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:24.751 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:24.751 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.751 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:24.751 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:24.751 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.751 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.751 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.751 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.009 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.009 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.009 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.009 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.009 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.009 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.009 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.009 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.009 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.009 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.009 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.009 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.009 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:25.009 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:25.009 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:25.009 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:25.009 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:25.009 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:25.010 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:25.010 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:25.010 rmmod nvme_tcp 00:06:25.010 rmmod nvme_fabrics 00:06:25.010 rmmod nvme_keyring 00:06:25.010 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:25.010 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:25.010 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:25.010 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2280510 ']' 00:06:25.010 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2280510 00:06:25.010 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2280510 ']' 00:06:25.010 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2280510 00:06:25.010 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:25.010 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.010 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2280510 00:06:25.010 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:25.010 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:25.010 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2280510' 00:06:25.010 killing process with pid 2280510 00:06:25.010 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2280510 00:06:25.010 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2280510 00:06:25.269 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:25.269 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:25.269 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:25.269 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:25.269 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:25.269 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:25.269 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:25.269 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:25.269 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:25.269 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.269 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.269 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.814 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:27.814 00:06:27.814 real 0m47.026s 00:06:27.814 user 3m38.788s 00:06:27.814 sys 0m15.777s 00:06:27.814 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.814 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:27.814 ************************************ 00:06:27.814 END TEST nvmf_ns_hotplug_stress 00:06:27.814 ************************************ 00:06:27.814 03:54:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:27.814 03:54:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:27.814 03:54:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.814 03:54:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:27.814 ************************************ 00:06:27.814 START TEST nvmf_delete_subsystem 00:06:27.814 ************************************ 00:06:27.814 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:27.814 * Looking for test storage... 
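
Before the new test body runs, the common harness locates the test storage and probes the installed lcov: the scripts/common.sh trace that follows compares version 1.15 against 2 with a dotted-version helper before choosing coverage flags. A small self-contained sketch of that comparison, written here for illustration rather than copied from the helper:

    # Return success when dotted version $1 sorts before $2 (e.g. 1.15 < 2),
    # mirroring the cmp_versions trace below: split on '.', '-' and ':',
    # then compare component by component, padding missing fields with 0.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov older than 2: use legacy LCOV_OPTS"
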
00:06:27.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.814 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:27.814 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:27.814 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:27.814 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:27.814 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.814 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:27.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.815 --rc genhtml_branch_coverage=1 00:06:27.815 --rc genhtml_function_coverage=1 00:06:27.815 --rc genhtml_legend=1 00:06:27.815 --rc geninfo_all_blocks=1 00:06:27.815 --rc geninfo_unexecuted_blocks=1 00:06:27.815 00:06:27.815 ' 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:27.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.815 --rc genhtml_branch_coverage=1 00:06:27.815 --rc genhtml_function_coverage=1 00:06:27.815 --rc genhtml_legend=1 00:06:27.815 --rc geninfo_all_blocks=1 00:06:27.815 --rc geninfo_unexecuted_blocks=1 00:06:27.815 00:06:27.815 ' 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:27.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.815 --rc genhtml_branch_coverage=1 00:06:27.815 --rc genhtml_function_coverage=1 00:06:27.815 --rc genhtml_legend=1 00:06:27.815 --rc geninfo_all_blocks=1 00:06:27.815 --rc geninfo_unexecuted_blocks=1 00:06:27.815 00:06:27.815 ' 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:27.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.815 --rc genhtml_branch_coverage=1 00:06:27.815 --rc genhtml_function_coverage=1 00:06:27.815 --rc genhtml_legend=1 00:06:27.815 --rc geninfo_all_blocks=1 00:06:27.815 --rc geninfo_unexecuted_blocks=1 00:06:27.815 00:06:27.815 ' 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:27.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:27.815 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.816 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:27.816 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:27.816 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:27.816 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:29.725 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:29.725 
03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:29.725 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:29.725 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:29.725 Found net devices under 0000:0a:00.1: cvl_0_1 
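
With both E810 ports identified (cvl_0_0 and cvl_0_1), the nvmf_tcp_init entries that follow move the target-side port into its own network namespace so that the initiator and the NVMe-oF target on the same host exchange real TCP traffic across the link. A sketch of that topology, with device names and addresses taken from this log (other rigs will differ):

    TARGET_NS=cvl_0_0_ns_spdk
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"                          # target-facing port
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                              # initiator -> target
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                   # target -> initiator

Running the target under ip netns exec cvl_0_0_ns_spdk, as the later trace shows, is what keeps the two endpoints isolated while still sharing one machine.
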
00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:29.725 03:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:29.725 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:29.725 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:29.725 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:29.725 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:29.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:29.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:06:29.725 00:06:29.725 --- 10.0.0.2 ping statistics --- 00:06:29.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.725 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:06:29.725 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:29.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:29.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:06:29.726 00:06:29.726 --- 10.0.0.1 ping statistics --- 00:06:29.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.726 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2288403 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2288403 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2288403 ']' 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.726 03:54:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.726 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:29.984 [2024-12-10 03:54:24.109284] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:29.984 [2024-12-10 03:54:24.109381] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.984 [2024-12-10 03:54:24.182820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.984 [2024-12-10 03:54:24.241896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:29.984 [2024-12-10 03:54:24.241951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:29.984 [2024-12-10 03:54:24.241979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:29.984 [2024-12-10 03:54:24.241991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:29.984 [2024-12-10 03:54:24.242001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:29.984 [2024-12-10 03:54:24.243553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.984 [2024-12-10 03:54:24.243571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.984 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.984 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:29.984 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:29.984 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:29.984 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.243 [2024-12-10 03:54:24.386462] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:30.243 03:54:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.243 [2024-12-10 03:54:24.402697] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.243 NULL1 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.243 Delay0 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2288434 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:30.243 03:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:30.243 [2024-12-10 03:54:24.487541] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
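
The deprecation warning above is spdk_nvme_perf connecting to the discovery subsystem on the freshly configured listener. By this point delete_subsystem.sh has assembled its target through a short RPC sequence and started a timed random read/write workload; two seconds in, it deletes the subsystem underneath that workload. Condensed from the traced commands (rpc_cmd is the test wrapper around scripts/rpc.py seen in the trace):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512         # 1000 MiB null bdev, 512-byte blocks
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Drive I/O from the initiator side, then pull the subsystem out mid-run.
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The Delay0 bdev wraps NULL1 with roughly a second of added latency in each direction (the four values are in microseconds), which keeps a full queue of commands outstanding when nvmf_delete_subsystem lands.
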
00:06:32.142 03:54:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:32.142 03:54:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:32.142 03:54:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:32.401 Read/Write completed with error (sct=0, sc=8), together with 'starting I/O failed: -6', repeated for every I/O still queued against nqn.2016-06.io.spdk:cnode1 while the subsystem was deleted (identical completions condensed)
00:06:32.401 [2024-12-10 03:54:26.608802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228860 is same with the state(6) to be set
00:06:32.401 [2024-12-10 03:54:26.609341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7bfc000c40 is same with the state(6) to be set
00:06:32.401 [2024-12-10 03:54:26.609828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22284a0 is same with the state(6) to be set
00:06:33.335 [2024-12-10 03:54:27.582941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22299b0 is same with the state(6) to be set
00:06:33.335 [2024-12-10 03:54:27.610395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7bfc00d020 is same with the state(6) to be set
00:06:33.335 [2024-12-10 03:54:27.610578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7bfc00d7e0 is same with the state(6) to be set
00:06:33.335 [2024-12-10 03:54:27.612761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22282c0 is same with the state(6) to be set
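The burst of failed completions above is spdk_nvme_perf still driving I/O against cnode1 at the moment the test deletes the subsystem underneath it. As a rough sketch only (not the script itself; paths assume the SPDK repo root, and the perf flags are the ones visible later in this trace), the stage amounts to:

    # Sketch, not the authoritative delete_subsystem.sh.
    # Start perf traffic against the subsystem in the background.
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    # Delete the subsystem while that I/O is in flight; the outstanding commands
    # then complete with sc=8 and the qpairs are torn down, as logged above.
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Poll until the perf process has exited instead of blocking on it forever.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && { echo "spdk_nvme_perf did not exit" >&2; exit 1; }
        sleep 0.5
    done

The failed perf run appears to be the expected outcome here: the trace below goes on to NOT-wait on the dead pid and then recreate the subsystem for the next stage.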
00:06:33.335 Read/Write completed with error (sct=0, sc=8) completions continue for the remaining queued I/Os (identical lines condensed)
00:06:33.335 [2024-12-10 03:54:27.613359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228680 is same with the state(6) to be set
00:06:33.335 Initializing NVMe Controllers
00:06:33.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:33.335 Controller IO queue size 128, less than required.
00:06:33.335 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:33.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:33.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:33.335 Initialization complete. Launching workers.
00:06:33.335 ========================================================
00:06:33.335 Latency(us)
00:06:33.335 Device Information : IOPS MiB/s Average min max
00:06:33.335 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 158.89 0.08 1009304.71 1028.40 2005032.83
00:06:33.335 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.85 0.08 926630.91 559.21 2001387.43
00:06:33.335 ========================================================
00:06:33.335 Total : 322.74 0.16 967331.86 559.21 2005032.83
00:06:33.335
00:06:33.335 [2024-12-10 03:54:27.613811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22299b0 (9): Bad file descriptor
00:06:33.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:33.335 03:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:33.335 03:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:33.335 03:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2288434
00:06:33.335 03:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2288434
00:06:33.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2288434) - No such process
00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2288434
00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2288434
00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:06:33.901 03:54:28
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2288434 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.901 [2024-12-10 03:54:28.137379] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2288951 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2288951 00:06:33.901 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:33.901 [2024-12-10 
03:54:28.209273] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:34.467 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:34.467 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2288951 00:06:34.467 03:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:35.032 03:54:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:35.032 03:54:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2288951 00:06:35.032 03:54:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:35.290 03:54:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:35.290 03:54:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2288951 00:06:35.290 03:54:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:35.855 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:35.856 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2288951 00:06:35.856 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:36.421 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:36.421 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2288951 00:06:36.421 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:36.986 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:36.986 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2288951 00:06:36.986 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:37.244 Initializing NVMe Controllers 00:06:37.244 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:37.244 Controller IO queue size 128, less than required. 00:06:37.244 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:37.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:37.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:37.244 Initialization complete. Launching workers. 
00:06:37.244 ======================================================== 00:06:37.244 Latency(us) 00:06:37.244 Device Information : IOPS MiB/s Average min max 00:06:37.244 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004333.11 1000179.16 1042438.41 00:06:37.244 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005319.42 1000227.21 1042046.67 00:06:37.244 ======================================================== 00:06:37.244 Total : 256.00 0.12 1004826.27 1000179.16 1042438.41 00:06:37.244 00:06:37.501 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:37.501 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2288951 00:06:37.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2288951) - No such process 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2288951 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:37.502 rmmod nvme_tcp 00:06:37.502 rmmod nvme_fabrics 00:06:37.502 rmmod nvme_keyring 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2288403 ']' 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2288403 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2288403 ']' 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2288403 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2288403 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2288403' 00:06:37.502 killing process with pid 2288403 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2288403 00:06:37.502 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2288403 00:06:37.762 03:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:37.762 03:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:37.762 03:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:37.762 03:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:37.762 03:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:37.762 03:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:37.762 03:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:37.762 03:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:37.762 03:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:37.762 03:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.762 03:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.762 03:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:40.296 00:06:40.296 real 0m12.430s 00:06:40.296 user 0m27.896s 00:06:40.296 sys 0m3.058s 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.296 ************************************ 00:06:40.296 END TEST nvmf_delete_subsystem 00:06:40.296 ************************************ 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:40.296 ************************************ 00:06:40.296 START TEST nvmf_host_management 00:06:40.296 ************************************ 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:40.296 * Looking for test storage... 
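Before host_management sets up its own target, note the nvmftestfini teardown traced above: the nvmf_tgt process (pid 2288403) is killed, the nvme-tcp/nvme-fabrics/nvme-keyring initiator modules are unloaded, the SPDK-tagged iptables rules are stripped, and the test namespace and addresses are flushed. A hedged sketch of that cleanup, assuming the interface and namespace names this rig uses (cvl_0_0/cvl_0_1, cvl_0_0_ns_spdk):

    # Sketch, not the authoritative nvmftestfini from nvmf/common.sh.
    kill "$nvmfpid" 2>/dev/null || true                 # stop the nvmf_tgt app
    sync
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring   # unload the kernel initiator modules
    # Drop only the iptables rules the test tagged with an SPDK_NVMF comment.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Remove the target-side network namespace and flush leftover test addresses.
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1 2>/dev/null || true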
00:06:40.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.296 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:40.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.297 --rc genhtml_branch_coverage=1 00:06:40.297 --rc genhtml_function_coverage=1 00:06:40.297 --rc genhtml_legend=1 00:06:40.297 --rc geninfo_all_blocks=1 00:06:40.297 --rc geninfo_unexecuted_blocks=1 00:06:40.297 00:06:40.297 ' 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:40.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.297 --rc genhtml_branch_coverage=1 00:06:40.297 --rc genhtml_function_coverage=1 00:06:40.297 --rc genhtml_legend=1 00:06:40.297 --rc geninfo_all_blocks=1 00:06:40.297 --rc geninfo_unexecuted_blocks=1 00:06:40.297 00:06:40.297 ' 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:40.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.297 --rc genhtml_branch_coverage=1 00:06:40.297 --rc genhtml_function_coverage=1 00:06:40.297 --rc genhtml_legend=1 00:06:40.297 --rc geninfo_all_blocks=1 00:06:40.297 --rc geninfo_unexecuted_blocks=1 00:06:40.297 00:06:40.297 ' 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:40.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.297 --rc genhtml_branch_coverage=1 00:06:40.297 --rc genhtml_function_coverage=1 00:06:40.297 --rc genhtml_legend=1 00:06:40.297 --rc geninfo_all_blocks=1 00:06:40.297 --rc geninfo_unexecuted_blocks=1 00:06:40.297 00:06:40.297 ' 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:40.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:40.297 03:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.201 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:42.202 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:42.202 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:42.202 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.202 03:54:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:42.202 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:42.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:06:42.202 00:06:42.202 --- 10.0.0.2 ping statistics --- 00:06:42.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.202 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:42.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:06:42.202 00:06:42.202 --- 10.0.0.1 ping statistics --- 00:06:42.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.202 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2291312 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2291312 00:06:42.202 03:54:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2291312 ']' 00:06:42.202 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.203 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.203 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.203 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.203 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.461 [2024-12-10 03:54:36.601960] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:42.461 [2024-12-10 03:54:36.602057] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.461 [2024-12-10 03:54:36.678420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.461 [2024-12-10 03:54:36.739934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.461 [2024-12-10 03:54:36.739994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.461 [2024-12-10 03:54:36.740021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.461 [2024-12-10 03:54:36.740033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.461 [2024-12-10 03:54:36.740043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
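A condensed sketch of the target-side bring-up that the nvmf_tcp_init trace above performs, in case it needs to be reproduced outside the harness. The interface names (cvl_0_0 / cvl_0_1), addresses, and nvmf_tgt flags are the ones used in this run; the relative binary path and the backgrounding of nvmf_tgt are assumptions for a standalone repro, not part of the harness itself.

    # target NIC port goes into its own namespace, initiator port stays in the root namespace
    TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"               # initiator/host side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"   # target side
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # allow NVMe/TCP traffic to port 4420 and verify reachability in both directions
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
    modprobe nvme-tcp

    # start the SPDK NVMe-oF target inside the namespace, then wait for /var/tmp/spdk.sock
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

With the target listening on 10.0.0.2:4420 from inside the namespace and the initiator reaching it over cvl_0_1, the rest of the host_management test can drive I/O from the root namespace as the trace below shows.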
00:06:42.461 [2024-12-10 03:54:36.741735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.461 [2024-12-10 03:54:36.741801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.461 [2024-12-10 03:54:36.741868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:42.461 [2024-12-10 03:54:36.741872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.720 [2024-12-10 03:54:36.897329] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.720 Malloc0 00:06:42.720 [2024-12-10 03:54:36.977995] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:42.720 03:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.720 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2291365 00:06:42.720 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2291365 /var/tmp/bdevperf.sock 00:06:42.720 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2291365 ']' 00:06:42.720 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:42.720 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:42.720 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:42.720 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.720 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:42.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:42.720 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:42.720 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.720 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:42.720 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.720 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:42.720 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:42.720 { 00:06:42.720 "params": { 00:06:42.720 "name": "Nvme$subsystem", 00:06:42.720 "trtype": "$TEST_TRANSPORT", 00:06:42.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:42.720 "adrfam": "ipv4", 00:06:42.720 "trsvcid": "$NVMF_PORT", 00:06:42.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:42.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:42.720 "hdgst": ${hdgst:-false}, 00:06:42.720 "ddgst": ${ddgst:-false} 00:06:42.720 }, 00:06:42.720 "method": "bdev_nvme_attach_controller" 00:06:42.720 } 00:06:42.720 EOF 00:06:42.720 )") 00:06:42.720 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:42.720 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:42.720 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:42.720 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:42.720 "params": { 00:06:42.720 "name": "Nvme0", 00:06:42.720 "trtype": "tcp", 00:06:42.720 "traddr": "10.0.0.2", 00:06:42.720 "adrfam": "ipv4", 00:06:42.720 "trsvcid": "4420", 00:06:42.720 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:42.720 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:42.720 "hdgst": false, 00:06:42.720 "ddgst": false 00:06:42.720 }, 00:06:42.720 "method": "bdev_nvme_attach_controller" 00:06:42.720 }' 00:06:42.720 [2024-12-10 03:54:37.052783] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
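The heredoc in the trace above only emits the bdev_nvme_attach_controller fragment; gen_nvmf_target_json feeds the assembled configuration to bdevperf over /dev/fd/63. A rough standalone equivalent, writing the config to a file instead of a process substitution, is sketched below. The parameter values are the resolved ones printed by the trace; the outer "subsystems"/"bdev" wrapper, the /tmp file path, and the relative bdevperf path are assumptions based on the usual SPDK JSON config layout, not something shown in this log.

    # hypothetical config file with the same resolved values as the trace above
    cat > /tmp/bdevperf_nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # same knobs as the trace: 64 outstanding I/Os of 64 KiB, verify workload, 10 s run
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10

The bdevperf instance attaches to the subsystem created earlier over NVMe/TCP and exposes Nvme0n1, which the waitforio loop below polls with bdev_get_iostat until read I/O is observed.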
00:06:42.720 [2024-12-10 03:54:37.052880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291365 ] 00:06:42.979 [2024-12-10 03:54:37.126750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.979 [2024-12-10 03:54:37.187167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.237 Running I/O for 10 seconds... 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:43.237 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:43.497 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:43.497 
03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:43.497 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:43.497 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:43.498 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.498 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.498 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.498 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:06:43.498 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:06:43.498 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:43.498 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:43.498 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:43.498 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:43.498 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.498 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.498 [2024-12-10 03:54:37.772639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772703] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is 
same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772879] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772975] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.772998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.773009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.773021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.773032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.773044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.773055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.773067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.773078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.773089] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.773101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.773113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.773125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.773136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.773147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.773159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.773170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.773182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a05b0 is same with the state(6) to be set 00:06:43.498 [2024-12-10 03:54:37.773430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.498 [2024-12-10 03:54:37.773473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.498 [2024-12-10 03:54:37.773509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.498 [2024-12-10 03:54:37.773526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.498 [2024-12-10 03:54:37.773543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.498 [2024-12-10 03:54:37.773568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.498 [2024-12-10 03:54:37.773584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.498 [2024-12-10 03:54:37.773607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.498 [2024-12-10 03:54:37.773622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.498 [2024-12-10 03:54:37.773636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.498 [2024-12-10 03:54:37.773651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.498 [2024-12-10 03:54:37.773665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:06:43.498 [2024-12-10 03:54:37.773679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.498 [2024-12-10 03:54:37.773692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.498 [2024-12-10 03:54:37.773707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.498 [2024-12-10 03:54:37.773720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.498 [2024-12-10 03:54:37.773735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.498 [2024-12-10 03:54:37.773748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.498 [2024-12-10 03:54:37.773763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.498 [2024-12-10 03:54:37.773777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.498 [2024-12-10 03:54:37.773791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.498 [2024-12-10 03:54:37.773805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.498 [2024-12-10 03:54:37.773820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.498 [2024-12-10 03:54:37.773833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.498 [2024-12-10 03:54:37.773854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.498 [2024-12-10 03:54:37.773867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.498 [2024-12-10 03:54:37.773882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.498 [2024-12-10 03:54:37.773899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.498 [2024-12-10 03:54:37.773923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.498 [2024-12-10 03:54:37.773937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.498 [2024-12-10 03:54:37.773951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.498 [2024-12-10 03:54:37.773965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 
[2024-12-10 03:54:37.773980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.773993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 
03:54:37.774266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 
03:54:37.774560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 
03:54:37.774857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.774976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.774989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.775008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.775022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.775037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.775050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.775065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.775079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.775094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.499 [2024-12-10 03:54:37.775107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.499 [2024-12-10 03:54:37.775122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.500 [2024-12-10 03:54:37.775135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.500 [2024-12-10 
03:54:37.775150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.500 [2024-12-10 03:54:37.775163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.500 [2024-12-10 03:54:37.775178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.500 [2024-12-10 03:54:37.775192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.500 [2024-12-10 03:54:37.775206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.500 [2024-12-10 03:54:37.775220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.500 [2024-12-10 03:54:37.775235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.500 [2024-12-10 03:54:37.775248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.500 [2024-12-10 03:54:37.775263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.500 [2024-12-10 03:54:37.775276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.500 [2024-12-10 03:54:37.775291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.500 [2024-12-10 03:54:37.775304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.500 [2024-12-10 03:54:37.775318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.500 [2024-12-10 03:54:37.775331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.500 [2024-12-10 03:54:37.775346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.500 [2024-12-10 03:54:37.775363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.500 [2024-12-10 03:54:37.775402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:06:43.500 [2024-12-10 03:54:37.776605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:43.500 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.500 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:43.500 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.500 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.500 task offset: 81920 on job bdev=Nvme0n1 fails 00:06:43.500 00:06:43.500 Latency(us) 00:06:43.500 [2024-12-10T02:54:37.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:43.500 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:43.500 Job: Nvme0n1 ended in about 0.40 seconds with error 00:06:43.500 Verification LBA range: start 0x0 length 0x400 00:06:43.500 Nvme0n1 : 0.40 1597.61 99.85 159.76 0.00 35362.70 2706.39 34952.53 00:06:43.500 [2024-12-10T02:54:37.889Z] =================================================================================================================== 00:06:43.500 [2024-12-10T02:54:37.889Z] Total : 1597.61 99.85 159.76 0.00 35362.70 2706.39 34952.53 00:06:43.500 [2024-12-10 03:54:37.778505] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.500 [2024-12-10 03:54:37.778536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb3660 (9): Bad file descriptor 00:06:43.500 [2024-12-10 03:54:37.783424] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:06:43.500 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.500 03:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:44.433 03:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2291365 00:06:44.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2291365) - No such process 00:06:44.433 03:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:44.433 03:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:44.433 03:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:44.433 03:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:44.433 03:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:44.433 03:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:44.433 03:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:44.433 03:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:44.433 { 00:06:44.433 "params": { 00:06:44.433 "name": "Nvme$subsystem", 00:06:44.433 "trtype": "$TEST_TRANSPORT", 00:06:44.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:44.433 "adrfam": "ipv4", 00:06:44.433 "trsvcid": "$NVMF_PORT", 00:06:44.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:44.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:44.433 "hdgst": ${hdgst:-false}, 00:06:44.433 "ddgst": ${ddgst:-false} 00:06:44.433 }, 00:06:44.433 "method": "bdev_nvme_attach_controller" 00:06:44.433 } 00:06:44.433 EOF 
00:06:44.433 )") 00:06:44.433 03:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:44.433 03:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:44.433 03:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:44.433 03:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:44.433 "params": { 00:06:44.433 "name": "Nvme0", 00:06:44.433 "trtype": "tcp", 00:06:44.433 "traddr": "10.0.0.2", 00:06:44.433 "adrfam": "ipv4", 00:06:44.433 "trsvcid": "4420", 00:06:44.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:44.433 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:44.433 "hdgst": false, 00:06:44.433 "ddgst": false 00:06:44.433 }, 00:06:44.433 "method": "bdev_nvme_attach_controller" 00:06:44.433 }' 00:06:44.692 [2024-12-10 03:54:38.834614] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:44.692 [2024-12-10 03:54:38.834697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291640 ] 00:06:44.692 [2024-12-10 03:54:38.903938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.692 [2024-12-10 03:54:38.964765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.950 Running I/O for 1 seconds... 00:06:45.883 1646.00 IOPS, 102.88 MiB/s 00:06:45.883 Latency(us) 00:06:45.883 [2024-12-10T02:54:40.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:45.883 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:45.883 Verification LBA range: start 0x0 length 0x400 00:06:45.883 Nvme0n1 : 1.03 1680.38 105.02 0.00 0.00 37473.86 5412.79 33399.09 00:06:45.883 [2024-12-10T02:54:40.272Z] =================================================================================================================== 00:06:45.883 [2024-12-10T02:54:40.272Z] Total : 1680.38 105.02 0.00 0.00 37473.86 5412.79 33399.09 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:46.141 03:54:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:46.141 rmmod nvme_tcp 00:06:46.141 rmmod nvme_fabrics 00:06:46.141 rmmod nvme_keyring 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2291312 ']' 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2291312 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2291312 ']' 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2291312 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.141 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2291312 00:06:46.399 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:46.399 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:46.399 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2291312' 00:06:46.399 killing process with pid 2291312 00:06:46.399 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2291312 00:06:46.399 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2291312 00:06:46.399 [2024-12-10 03:54:40.755962] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:46.658 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:46.658 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:46.658 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:46.658 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:46.658 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:46.658 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:46.658 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:46.658 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:46.658 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:46.658 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.658 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.658 03:54:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.563 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:48.563 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:48.563 00:06:48.563 real 0m8.724s 00:06:48.563 user 0m18.997s 00:06:48.563 sys 0m2.809s 00:06:48.563 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.563 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.563 ************************************ 00:06:48.563 END TEST nvmf_host_management 00:06:48.563 ************************************ 00:06:48.563 03:54:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:48.563 03:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:48.563 03:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.563 03:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:48.563 ************************************ 00:06:48.563 START TEST nvmf_lvol 00:06:48.564 ************************************ 00:06:48.564 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:48.564 * Looking for test storage... 00:06:48.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:48.564 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:48.564 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:48.564 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 
-- # : 1 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:48.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.823 --rc genhtml_branch_coverage=1 00:06:48.823 --rc genhtml_function_coverage=1 00:06:48.823 --rc genhtml_legend=1 00:06:48.823 --rc geninfo_all_blocks=1 00:06:48.823 --rc geninfo_unexecuted_blocks=1 00:06:48.823 00:06:48.823 ' 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:48.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.823 --rc genhtml_branch_coverage=1 00:06:48.823 --rc genhtml_function_coverage=1 00:06:48.823 --rc genhtml_legend=1 00:06:48.823 --rc geninfo_all_blocks=1 00:06:48.823 --rc geninfo_unexecuted_blocks=1 00:06:48.823 00:06:48.823 ' 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:48.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.823 --rc genhtml_branch_coverage=1 00:06:48.823 --rc genhtml_function_coverage=1 00:06:48.823 --rc genhtml_legend=1 00:06:48.823 --rc geninfo_all_blocks=1 00:06:48.823 --rc geninfo_unexecuted_blocks=1 00:06:48.823 00:06:48.823 ' 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:48.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.823 --rc genhtml_branch_coverage=1 00:06:48.823 --rc genhtml_function_coverage=1 00:06:48.823 --rc genhtml_legend=1 00:06:48.823 --rc geninfo_all_blocks=1 00:06:48.823 --rc geninfo_unexecuted_blocks=1 00:06:48.823 00:06:48.823 ' 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:48.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:48.823 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:48.824 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:48.824 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:48.824 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:48.824 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:48.824 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.824 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:48.824 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:48.824 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:48.824 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:48.824 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:48.824 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:48.824 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.824 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:48.824 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.824 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:48.824 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:48.824 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:48.824 03:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:51.360 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:51.360 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:51.360 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:51.360 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:51.361 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:51.361 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:51.361 03:54:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:51.361 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:51.361 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:51.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:06:51.361 00:06:51.361 --- 10.0.0.2 ping statistics --- 00:06:51.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.361 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:51.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
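Condensed out of the xtrace above, the nvmf_tcp_init step moves the target-side port (cvl_0_0) into a private network namespace, addresses both ends, opens the NVMe/TCP port, and checks reachability in both directions. A rough sketch of the commands as they appear in this run (the iptables comment tag is omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target interface leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                  # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> root namespace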
00:06:51.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:06:51.361 00:06:51.361 --- 10.0.0.1 ping statistics --- 00:06:51.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.361 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:51.361 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2293847 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2293847 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2293847 ']' 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:51.362 [2024-12-10 03:54:45.444894] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
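The target application is then started inside that namespace with the core mask requested by nvmfappstart -m 0x7, and the harness blocks until the application answers on its RPC socket. Stripped of the xtrace prefixes, and with the backgrounding shown only schematically, this amounts to roughly:

  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
  nvmfpid=$!                    # 2293847 in this run
  waitforlisten "$nvmfpid"      # polls until the app listens on /var/tmp/spdk.sock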
00:06:51.362 [2024-12-10 03:54:45.444993] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.362 [2024-12-10 03:54:45.518031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.362 [2024-12-10 03:54:45.572656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.362 [2024-12-10 03:54:45.572725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.362 [2024-12-10 03:54:45.572739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.362 [2024-12-10 03:54:45.572750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.362 [2024-12-10 03:54:45.572760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:51.362 [2024-12-10 03:54:45.574152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.362 [2024-12-10 03:54:45.574213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.362 [2024-12-10 03:54:45.574216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.362 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:51.619 [2024-12-10 03:54:45.954294] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.619 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:52.185 03:54:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:52.185 03:54:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:52.443 03:54:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:52.443 03:54:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:52.702 03:54:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:52.960 03:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=86f11927-9b1d-4ee7-8936-641982128c11 00:06:52.960 03:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 86f11927-9b1d-4ee7-8936-641982128c11 lvol 20 00:06:53.217 03:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2d5f619b-b705-4055-98dc-0308e369197e 00:06:53.217 03:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:53.475 03:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2d5f619b-b705-4055-98dc-0308e369197e 00:06:53.733 03:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:53.991 [2024-12-10 03:54:48.211493] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:53.991 03:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:54.248 03:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2294273 00:06:54.248 03:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:54.248 03:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:55.181 03:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2d5f619b-b705-4055-98dc-0308e369197e MY_SNAPSHOT 00:06:55.748 03:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4a24c177-9a94-4b2e-b25a-356dee459306 00:06:55.749 03:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2d5f619b-b705-4055-98dc-0308e369197e 30 00:06:56.006 03:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4a24c177-9a94-4b2e-b25a-356dee459306 MY_CLONE 00:06:56.265 03:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e655de75-000f-498e-90d8-cb71584e29dd 00:06:56.265 03:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e655de75-000f-498e-90d8-cb71584e29dd 00:06:56.831 03:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2294273 00:07:05.004 Initializing NVMe Controllers 00:07:05.004 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:05.004 Controller IO queue size 128, less than required. 00:07:05.004 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:05.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:05.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:05.004 Initialization complete. Launching workers. 00:07:05.004 ======================================================== 00:07:05.004 Latency(us) 00:07:05.004 Device Information : IOPS MiB/s Average min max 00:07:05.004 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10443.60 40.80 12266.69 1493.07 85412.82 00:07:05.004 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10324.40 40.33 12406.06 2115.33 65769.57 00:07:05.004 ======================================================== 00:07:05.004 Total : 20768.00 81.12 12335.97 1493.07 85412.82 00:07:05.004 00:07:05.004 03:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:05.004 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2d5f619b-b705-4055-98dc-0308e369197e 00:07:05.262 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 86f11927-9b1d-4ee7-8936-641982128c11 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:05.520 rmmod nvme_tcp 00:07:05.520 rmmod nvme_fabrics 00:07:05.520 rmmod nvme_keyring 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2293847 ']' 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2293847 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2293847 ']' 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2293847 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.520 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2293847 00:07:05.778 03:54:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.778 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.778 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2293847' 00:07:05.778 killing process with pid 2293847 00:07:05.778 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2293847 00:07:05.778 03:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2293847 00:07:06.037 03:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:06.037 03:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:06.037 03:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:06.037 03:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:06.037 03:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:06.037 03:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:06.037 03:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:06.037 03:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:06.037 03:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:06.037 03:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.037 03:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:06.037 03:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.945 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:07.945 00:07:07.945 real 0m19.374s 00:07:07.945 user 1m5.014s 00:07:07.945 sys 0m5.924s 00:07:07.945 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.945 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:07.945 ************************************ 00:07:07.945 END TEST nvmf_lvol 00:07:07.945 ************************************ 00:07:07.945 03:55:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:07.945 03:55:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:07.945 03:55:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.945 03:55:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:07.945 ************************************ 00:07:07.945 START TEST nvmf_lvs_grow 00:07:07.945 ************************************ 00:07:07.945 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:08.204 * Looking for test storage... 
00:07:08.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:08.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.204 --rc genhtml_branch_coverage=1 00:07:08.204 --rc genhtml_function_coverage=1 00:07:08.204 --rc genhtml_legend=1 00:07:08.204 --rc geninfo_all_blocks=1 00:07:08.204 --rc geninfo_unexecuted_blocks=1 00:07:08.204 00:07:08.204 ' 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:08.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.204 --rc genhtml_branch_coverage=1 00:07:08.204 --rc genhtml_function_coverage=1 00:07:08.204 --rc genhtml_legend=1 00:07:08.204 --rc geninfo_all_blocks=1 00:07:08.204 --rc geninfo_unexecuted_blocks=1 00:07:08.204 00:07:08.204 ' 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:08.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.204 --rc genhtml_branch_coverage=1 00:07:08.204 --rc genhtml_function_coverage=1 00:07:08.204 --rc genhtml_legend=1 00:07:08.204 --rc geninfo_all_blocks=1 00:07:08.204 --rc geninfo_unexecuted_blocks=1 00:07:08.204 00:07:08.204 ' 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:08.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.204 --rc genhtml_branch_coverage=1 00:07:08.204 --rc genhtml_function_coverage=1 00:07:08.204 --rc genhtml_legend=1 00:07:08.204 --rc geninfo_all_blocks=1 00:07:08.204 --rc geninfo_unexecuted_blocks=1 00:07:08.204 00:07:08.204 ' 00:07:08.204 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:08.205 03:55:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:08.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:08.205 03:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:10.739 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:10.739 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.739 03:55:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:10.739 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:10.739 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:10.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:10.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:07:10.739 00:07:10.739 --- 10.0.0.2 ping statistics --- 00:07:10.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.739 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:10.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:07:10.739 00:07:10.739 --- 10.0.0.1 ping statistics --- 00:07:10.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.739 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:10.739 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2297564 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2297564 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2297564 ']' 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.740 03:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.740 [2024-12-10 03:55:04.789464] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
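Note on the setup traced above: nvmftestinit detects the two e810 ports (cvl_0_0/cvl_0_1), moves the target port into the cvl_0_0_ns_spdk network namespace, assigns 10.0.0.0/24 addresses on both sides, opens TCP port 4420 in iptables, and verifies reachability with ping in both directions. The following is a minimal stand-alone sketch of that sequence, assuming the same interface names and addressing seen in this run; it is illustrative only, not the full nvmf/common.sh logic.

#!/usr/bin/env bash
# Recreate the namespace-based NVMe/TCP test network traced above.
# Assumes the cvl_0_0/cvl_0_1 net devices and 10.0.0.0/24 addressing from this run.
set -e

TARGET_IF=cvl_0_0        # moved into a namespace and used by nvmf_tgt
INITIATOR_IF=cvl_0_1     # stays in the root namespace for the initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port on the initiator side and verify reachability both ways.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

The target application is then launched inside the namespace via "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt", while rpc.py keeps reaching it through the /var/tmp/spdk.sock UNIX socket from the host side, as the trace below shows.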
00:07:10.740 [2024-12-10 03:55:04.789564] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.740 [2024-12-10 03:55:04.861306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.740 [2024-12-10 03:55:04.919616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.740 [2024-12-10 03:55:04.919677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.740 [2024-12-10 03:55:04.919706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.740 [2024-12-10 03:55:04.919717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.740 [2024-12-10 03:55:04.919727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:10.740 [2024-12-10 03:55:04.920391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.740 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.740 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:10.740 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:10.740 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:10.740 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.740 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:10.740 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:10.997 [2024-12-10 03:55:05.316233] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.997 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:10.997 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.997 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.997 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.997 ************************************ 00:07:10.997 START TEST lvs_grow_clean 00:07:10.997 ************************************ 00:07:10.997 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:10.997 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:10.997 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:10.997 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:10.997 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:10.997 03:55:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:10.997 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:10.997 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:10.997 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:10.997 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:11.563 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:11.563 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:11.563 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=645797f7-2f78-43c7-bfca-283b7d15fc1b 00:07:11.563 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 645797f7-2f78-43c7-bfca-283b7d15fc1b 00:07:11.563 03:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:11.821 03:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:11.821 03:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:11.821 03:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 645797f7-2f78-43c7-bfca-283b7d15fc1b lvol 150 00:07:12.079 03:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1d0f8776-2e78-4d1f-9702-2425f2f73576 00:07:12.079 03:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:12.336 03:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:12.336 [2024-12-10 03:55:06.709922] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:12.337 [2024-12-10 03:55:06.710013] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:12.337 true 00:07:12.595 03:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
645797f7-2f78-43c7-bfca-283b7d15fc1b 00:07:12.595 03:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:12.853 03:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:12.853 03:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:13.110 03:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1d0f8776-2e78-4d1f-9702-2425f2f73576 00:07:13.367 03:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:13.625 [2024-12-10 03:55:07.789158] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.625 03:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:13.884 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2298006 00:07:13.884 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:13.884 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:13.884 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2298006 /var/tmp/bdevperf.sock 00:07:13.884 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2298006 ']' 00:07:13.884 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:13.884 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.884 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:13.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:13.884 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.884 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:13.884 [2024-12-10 03:55:08.118285] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
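The lvs_grow_clean steps traced above build a 200 MiB file-backed AIO bdev, a logical volume store on it with 4 MiB clusters (49 data clusters), and a 150 MiB lvol; the backing file is then enlarged to 400 MiB and rescanned so the lvstore can be grown later, and the lvol is exported through nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420 before bdevperf starts. A condensed sketch of that RPC sequence, assuming the SPDK checkout path from this run (the UUIDs are returned by the RPCs at runtime; the tcp transport was created earlier with "rpc.py nvmf_create_transport -t tcp -o -u 8192"):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py
AIO_FILE=$SPDK/test/nvmf/target/aio_bdev

# 200 MiB backing file -> AIO bdev -> lvstore with 4 MiB clusters (49 data clusters).
rm -f "$AIO_FILE"
truncate -s 200M "$AIO_FILE"
$RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096
lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)

# 150 MiB lvol, then enlarge the backing file so the lvstore can be grown under I/O.
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)
truncate -s 400M "$AIO_FILE"
$RPC bdev_aio_rescan aio_bdev

# Export the lvol over NVMe/TCP for the initiator-side bdevperf.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420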
00:07:13.884 [2024-12-10 03:55:08.118368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2298006 ] 00:07:13.884 [2024-12-10 03:55:08.184041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.884 [2024-12-10 03:55:08.242768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.142 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.142 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:14.142 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:14.707 Nvme0n1 00:07:14.707 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:14.965 [ 00:07:14.965 { 00:07:14.965 "name": "Nvme0n1", 00:07:14.965 "aliases": [ 00:07:14.965 "1d0f8776-2e78-4d1f-9702-2425f2f73576" 00:07:14.965 ], 00:07:14.965 "product_name": "NVMe disk", 00:07:14.965 "block_size": 4096, 00:07:14.965 "num_blocks": 38912, 00:07:14.965 "uuid": "1d0f8776-2e78-4d1f-9702-2425f2f73576", 00:07:14.965 "numa_id": 0, 00:07:14.965 "assigned_rate_limits": { 00:07:14.965 "rw_ios_per_sec": 0, 00:07:14.965 "rw_mbytes_per_sec": 0, 00:07:14.965 "r_mbytes_per_sec": 0, 00:07:14.965 "w_mbytes_per_sec": 0 00:07:14.965 }, 00:07:14.965 "claimed": false, 00:07:14.965 "zoned": false, 00:07:14.965 "supported_io_types": { 00:07:14.965 "read": true, 00:07:14.965 "write": true, 00:07:14.965 "unmap": true, 00:07:14.965 "flush": true, 00:07:14.965 "reset": true, 00:07:14.965 "nvme_admin": true, 00:07:14.965 "nvme_io": true, 00:07:14.965 "nvme_io_md": false, 00:07:14.965 "write_zeroes": true, 00:07:14.965 "zcopy": false, 00:07:14.965 "get_zone_info": false, 00:07:14.965 "zone_management": false, 00:07:14.965 "zone_append": false, 00:07:14.965 "compare": true, 00:07:14.965 "compare_and_write": true, 00:07:14.965 "abort": true, 00:07:14.965 "seek_hole": false, 00:07:14.965 "seek_data": false, 00:07:14.965 "copy": true, 00:07:14.965 "nvme_iov_md": false 00:07:14.965 }, 00:07:14.965 "memory_domains": [ 00:07:14.965 { 00:07:14.965 "dma_device_id": "system", 00:07:14.965 "dma_device_type": 1 00:07:14.965 } 00:07:14.965 ], 00:07:14.965 "driver_specific": { 00:07:14.965 "nvme": [ 00:07:14.965 { 00:07:14.965 "trid": { 00:07:14.965 "trtype": "TCP", 00:07:14.965 "adrfam": "IPv4", 00:07:14.965 "traddr": "10.0.0.2", 00:07:14.965 "trsvcid": "4420", 00:07:14.965 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:14.965 }, 00:07:14.965 "ctrlr_data": { 00:07:14.965 "cntlid": 1, 00:07:14.965 "vendor_id": "0x8086", 00:07:14.965 "model_number": "SPDK bdev Controller", 00:07:14.965 "serial_number": "SPDK0", 00:07:14.965 "firmware_revision": "25.01", 00:07:14.965 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:14.965 "oacs": { 00:07:14.965 "security": 0, 00:07:14.965 "format": 0, 00:07:14.965 "firmware": 0, 00:07:14.965 "ns_manage": 0 00:07:14.965 }, 00:07:14.965 "multi_ctrlr": true, 00:07:14.965 
"ana_reporting": false 00:07:14.965 }, 00:07:14.965 "vs": { 00:07:14.965 "nvme_version": "1.3" 00:07:14.965 }, 00:07:14.965 "ns_data": { 00:07:14.965 "id": 1, 00:07:14.965 "can_share": true 00:07:14.965 } 00:07:14.965 } 00:07:14.965 ], 00:07:14.965 "mp_policy": "active_passive" 00:07:14.965 } 00:07:14.965 } 00:07:14.965 ] 00:07:14.965 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2298142 00:07:14.965 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:14.965 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:14.965 Running I/O for 10 seconds... 00:07:15.899 Latency(us) 00:07:15.899 [2024-12-10T02:55:10.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.900 Nvme0n1 : 1.00 14899.00 58.20 0.00 0.00 0.00 0.00 0.00 00:07:15.900 [2024-12-10T02:55:10.289Z] =================================================================================================================== 00:07:15.900 [2024-12-10T02:55:10.289Z] Total : 14899.00 58.20 0.00 0.00 0.00 0.00 0.00 00:07:15.900 00:07:16.834 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 645797f7-2f78-43c7-bfca-283b7d15fc1b 00:07:17.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.092 Nvme0n1 : 2.00 15101.00 58.99 0.00 0.00 0.00 0.00 0.00 00:07:17.092 [2024-12-10T02:55:11.481Z] =================================================================================================================== 00:07:17.092 [2024-12-10T02:55:11.481Z] Total : 15101.00 58.99 0.00 0.00 0.00 0.00 0.00 00:07:17.092 00:07:17.092 true 00:07:17.092 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 645797f7-2f78-43c7-bfca-283b7d15fc1b 00:07:17.092 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:17.350 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:17.350 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:17.350 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2298142 00:07:17.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.916 Nvme0n1 : 3.00 15253.67 59.58 0.00 0.00 0.00 0.00 0.00 00:07:17.916 [2024-12-10T02:55:12.305Z] =================================================================================================================== 00:07:17.916 [2024-12-10T02:55:12.305Z] Total : 15253.67 59.58 0.00 0.00 0.00 0.00 0.00 00:07:17.916 00:07:18.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.849 Nvme0n1 : 4.00 15409.00 60.19 0.00 0.00 0.00 0.00 0.00 00:07:18.849 [2024-12-10T02:55:13.238Z] 
=================================================================================================================== 00:07:18.849 [2024-12-10T02:55:13.238Z] Total : 15409.00 60.19 0.00 0.00 0.00 0.00 0.00 00:07:18.849 00:07:20.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.222 Nvme0n1 : 5.00 15480.20 60.47 0.00 0.00 0.00 0.00 0.00 00:07:20.222 [2024-12-10T02:55:14.611Z] =================================================================================================================== 00:07:20.222 [2024-12-10T02:55:14.611Z] Total : 15480.20 60.47 0.00 0.00 0.00 0.00 0.00 00:07:20.222 00:07:21.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.154 Nvme0n1 : 6.00 15556.67 60.77 0.00 0.00 0.00 0.00 0.00 00:07:21.154 [2024-12-10T02:55:15.543Z] =================================================================================================================== 00:07:21.154 [2024-12-10T02:55:15.543Z] Total : 15556.67 60.77 0.00 0.00 0.00 0.00 0.00 00:07:21.154 00:07:22.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.087 Nvme0n1 : 7.00 15611.14 60.98 0.00 0.00 0.00 0.00 0.00 00:07:22.087 [2024-12-10T02:55:16.476Z] =================================================================================================================== 00:07:22.087 [2024-12-10T02:55:16.476Z] Total : 15611.14 60.98 0.00 0.00 0.00 0.00 0.00 00:07:22.087 00:07:23.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.022 Nvme0n1 : 8.00 15660.00 61.17 0.00 0.00 0.00 0.00 0.00 00:07:23.022 [2024-12-10T02:55:17.411Z] =================================================================================================================== 00:07:23.022 [2024-12-10T02:55:17.411Z] Total : 15660.00 61.17 0.00 0.00 0.00 0.00 0.00 00:07:23.022 00:07:23.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.957 Nvme0n1 : 9.00 15698.00 61.32 0.00 0.00 0.00 0.00 0.00 00:07:23.957 [2024-12-10T02:55:18.346Z] =================================================================================================================== 00:07:23.957 [2024-12-10T02:55:18.346Z] Total : 15698.00 61.32 0.00 0.00 0.00 0.00 0.00 00:07:23.957 00:07:24.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.901 Nvme0n1 : 10.00 15722.10 61.41 0.00 0.00 0.00 0.00 0.00 00:07:24.901 [2024-12-10T02:55:19.290Z] =================================================================================================================== 00:07:24.901 [2024-12-10T02:55:19.290Z] Total : 15722.10 61.41 0.00 0.00 0.00 0.00 0.00 00:07:24.901 00:07:24.901 00:07:24.901 Latency(us) 00:07:24.901 [2024-12-10T02:55:19.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.901 Nvme0n1 : 10.00 15721.57 61.41 0.00 0.00 8136.52 3519.53 16408.27 00:07:24.901 [2024-12-10T02:55:19.290Z] =================================================================================================================== 00:07:24.901 [2024-12-10T02:55:19.290Z] Total : 15721.57 61.41 0.00 0.00 8136.52 3519.53 16408.27 00:07:24.901 { 00:07:24.901 "results": [ 00:07:24.901 { 00:07:24.901 "job": "Nvme0n1", 00:07:24.901 "core_mask": "0x2", 00:07:24.901 "workload": "randwrite", 00:07:24.901 "status": "finished", 00:07:24.901 "queue_depth": 128, 00:07:24.901 "io_size": 4096, 00:07:24.901 
"runtime": 10.004406, 00:07:24.901 "iops": 15721.573074903197, 00:07:24.901 "mibps": 61.412394823840614, 00:07:24.901 "io_failed": 0, 00:07:24.901 "io_timeout": 0, 00:07:24.901 "avg_latency_us": 8136.523888925389, 00:07:24.901 "min_latency_us": 3519.525925925926, 00:07:24.901 "max_latency_us": 16408.27259259259 00:07:24.901 } 00:07:24.901 ], 00:07:24.901 "core_count": 1 00:07:24.901 } 00:07:24.901 03:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2298006 00:07:24.901 03:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2298006 ']' 00:07:24.901 03:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2298006 00:07:24.901 03:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:24.901 03:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.901 03:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2298006 00:07:25.162 03:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:25.162 03:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:25.162 03:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2298006' 00:07:25.162 killing process with pid 2298006 00:07:25.162 03:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2298006 00:07:25.162 Received shutdown signal, test time was about 10.000000 seconds 00:07:25.162 00:07:25.162 Latency(us) 00:07:25.162 [2024-12-10T02:55:19.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.162 [2024-12-10T02:55:19.551Z] =================================================================================================================== 00:07:25.162 [2024-12-10T02:55:19.551Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:25.162 03:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2298006 00:07:25.162 03:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:25.419 03:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:25.984 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 645797f7-2f78-43c7-bfca-283b7d15fc1b 00:07:25.984 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:25.984 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:25.984 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:25.984 03:55:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:26.551 [2024-12-10 03:55:20.626455] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:26.551 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 645797f7-2f78-43c7-bfca-283b7d15fc1b 00:07:26.551 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:26.551 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 645797f7-2f78-43c7-bfca-283b7d15fc1b 00:07:26.551 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.551 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.551 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.551 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.551 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.551 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.551 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.551 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:26.551 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 645797f7-2f78-43c7-bfca-283b7d15fc1b 00:07:26.551 request: 00:07:26.551 { 00:07:26.551 "uuid": "645797f7-2f78-43c7-bfca-283b7d15fc1b", 00:07:26.551 "method": "bdev_lvol_get_lvstores", 00:07:26.551 "req_id": 1 00:07:26.551 } 00:07:26.551 Got JSON-RPC error response 00:07:26.551 response: 00:07:26.551 { 00:07:26.551 "code": -19, 00:07:26.551 "message": "No such device" 00:07:26.551 } 00:07:26.551 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:26.551 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.551 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:26.551 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.551 03:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:27.117 aio_bdev 00:07:27.117 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1d0f8776-2e78-4d1f-9702-2425f2f73576 00:07:27.117 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=1d0f8776-2e78-4d1f-9702-2425f2f73576 00:07:27.117 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:27.117 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:27.117 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:27.117 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:27.117 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:27.117 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1d0f8776-2e78-4d1f-9702-2425f2f73576 -t 2000 00:07:27.375 [ 00:07:27.375 { 00:07:27.375 "name": "1d0f8776-2e78-4d1f-9702-2425f2f73576", 00:07:27.375 "aliases": [ 00:07:27.375 "lvs/lvol" 00:07:27.375 ], 00:07:27.375 "product_name": "Logical Volume", 00:07:27.375 "block_size": 4096, 00:07:27.375 "num_blocks": 38912, 00:07:27.375 "uuid": "1d0f8776-2e78-4d1f-9702-2425f2f73576", 00:07:27.375 "assigned_rate_limits": { 00:07:27.375 "rw_ios_per_sec": 0, 00:07:27.375 "rw_mbytes_per_sec": 0, 00:07:27.375 "r_mbytes_per_sec": 0, 00:07:27.375 "w_mbytes_per_sec": 0 00:07:27.375 }, 00:07:27.375 "claimed": false, 00:07:27.375 "zoned": false, 00:07:27.375 "supported_io_types": { 00:07:27.375 "read": true, 00:07:27.375 "write": true, 00:07:27.375 "unmap": true, 00:07:27.375 "flush": false, 00:07:27.375 "reset": true, 00:07:27.375 "nvme_admin": false, 00:07:27.375 "nvme_io": false, 00:07:27.375 "nvme_io_md": false, 00:07:27.375 "write_zeroes": true, 00:07:27.375 "zcopy": false, 00:07:27.375 "get_zone_info": false, 00:07:27.375 "zone_management": false, 00:07:27.375 "zone_append": false, 00:07:27.375 "compare": false, 00:07:27.375 "compare_and_write": false, 00:07:27.375 "abort": false, 00:07:27.375 "seek_hole": true, 00:07:27.375 "seek_data": true, 00:07:27.375 "copy": false, 00:07:27.375 "nvme_iov_md": false 00:07:27.375 }, 00:07:27.375 "driver_specific": { 00:07:27.375 "lvol": { 00:07:27.375 "lvol_store_uuid": "645797f7-2f78-43c7-bfca-283b7d15fc1b", 00:07:27.375 "base_bdev": "aio_bdev", 00:07:27.375 "thin_provision": false, 00:07:27.375 "num_allocated_clusters": 38, 00:07:27.375 "snapshot": false, 00:07:27.375 "clone": false, 00:07:27.375 "esnap_clone": false 00:07:27.375 } 00:07:27.375 } 00:07:27.375 } 00:07:27.375 ] 00:07:27.375 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:27.375 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 645797f7-2f78-43c7-bfca-283b7d15fc1b 00:07:27.375 
03:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:27.632 03:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:27.920 03:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 645797f7-2f78-43c7-bfca-283b7d15fc1b 00:07:27.921 03:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:28.203 03:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:28.203 03:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1d0f8776-2e78-4d1f-9702-2425f2f73576 00:07:28.203 03:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 645797f7-2f78-43c7-bfca-283b7d15fc1b 00:07:28.769 03:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:28.769 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:28.769 00:07:28.769 real 0m17.771s 00:07:28.769 user 0m17.337s 00:07:28.769 sys 0m1.789s 00:07:28.769 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.769 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:28.769 ************************************ 00:07:28.769 END TEST lvs_grow_clean 00:07:28.769 ************************************ 00:07:29.027 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:29.027 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.027 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.027 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:29.027 ************************************ 00:07:29.027 START TEST lvs_grow_dirty 00:07:29.027 ************************************ 00:07:29.027 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:29.027 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:29.027 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:29.027 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:29.027 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:29.027 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:29.027 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:29.027 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:29.027 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:29.027 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:29.285 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:29.285 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:29.543 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6fa54bdf-94f4-46ef-b8f0-1ed2406b3c4f 00:07:29.543 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa54bdf-94f4-46ef-b8f0-1ed2406b3c4f 00:07:29.543 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:29.801 03:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:29.801 03:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:29.801 03:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6fa54bdf-94f4-46ef-b8f0-1ed2406b3c4f lvol 150 00:07:30.059 03:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=701af46d-43af-4ce0-be8e-3154efd336e7 00:07:30.059 03:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:30.059 03:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:30.317 [2024-12-10 03:55:24.573033] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:30.317 [2024-12-10 03:55:24.573126] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:30.317 true 00:07:30.317 03:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa54bdf-94f4-46ef-b8f0-1ed2406b3c4f 00:07:30.317 03:55:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:30.575 03:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:30.575 03:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:30.832 03:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 701af46d-43af-4ce0-be8e-3154efd336e7 00:07:31.089 03:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:31.347 [2024-12-10 03:55:25.640260] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.347 03:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:31.606 03:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2300199 00:07:31.606 03:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:31.606 03:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2300199 /var/tmp/bdevperf.sock 00:07:31.606 03:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:31.606 03:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2300199 ']' 00:07:31.606 03:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:31.606 03:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.606 03:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:31.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:31.606 03:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.606 03:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:31.606 [2024-12-10 03:55:25.971817] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
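The lvs_grow_dirty variant now repeats the same flow. As the clean run above already showed, bdevperf is attached to the exported namespace over TCP, the 10-second randwrite workload is started, and the lvstore is grown from 49 to 99 data clusters while I/O is in flight. A sketch of that attach/run/grow/verify step, assuming the rpc.py, bdevperf.py and socket paths from this run and that $lvs holds the lvstore UUID captured earlier:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BDEVPERF_RPC=/var/tmp/bdevperf.sock

# bdevperf was started with -z, so it waits for a bdev and an explicit test kick-off.
$RPC -s "$BDEVPERF_RPC" bdev_nvme_attach_controller -b Nvme0 -t tcp \
     -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
     -s "$BDEVPERF_RPC" perform_tests &
run_test_pid=$!

# Grow the lvstore into the enlarged backing file while the workload runs,
# then confirm the data cluster count went from 49 to 99 (4 MiB clusters).
sleep 2
$RPC bdev_lvol_grow_lvstore -u "$lvs"
data_clusters=$($RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
(( data_clusters == 99 ))

wait "$run_test_pid"

The per-second IOPS tables and the final latency summary that follow in the trace are bdevperf's own output for this 10-second run.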
00:07:31.606 [2024-12-10 03:55:25.971906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2300199 ] 00:07:31.864 [2024-12-10 03:55:26.038716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.864 [2024-12-10 03:55:26.095513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.864 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.864 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:31.864 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:32.429 Nvme0n1 00:07:32.429 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:32.687 [ 00:07:32.687 { 00:07:32.687 "name": "Nvme0n1", 00:07:32.687 "aliases": [ 00:07:32.687 "701af46d-43af-4ce0-be8e-3154efd336e7" 00:07:32.687 ], 00:07:32.687 "product_name": "NVMe disk", 00:07:32.687 "block_size": 4096, 00:07:32.687 "num_blocks": 38912, 00:07:32.687 "uuid": "701af46d-43af-4ce0-be8e-3154efd336e7", 00:07:32.687 "numa_id": 0, 00:07:32.687 "assigned_rate_limits": { 00:07:32.687 "rw_ios_per_sec": 0, 00:07:32.687 "rw_mbytes_per_sec": 0, 00:07:32.687 "r_mbytes_per_sec": 0, 00:07:32.687 "w_mbytes_per_sec": 0 00:07:32.687 }, 00:07:32.687 "claimed": false, 00:07:32.687 "zoned": false, 00:07:32.687 "supported_io_types": { 00:07:32.687 "read": true, 00:07:32.687 "write": true, 00:07:32.687 "unmap": true, 00:07:32.687 "flush": true, 00:07:32.687 "reset": true, 00:07:32.687 "nvme_admin": true, 00:07:32.687 "nvme_io": true, 00:07:32.687 "nvme_io_md": false, 00:07:32.687 "write_zeroes": true, 00:07:32.687 "zcopy": false, 00:07:32.687 "get_zone_info": false, 00:07:32.687 "zone_management": false, 00:07:32.687 "zone_append": false, 00:07:32.687 "compare": true, 00:07:32.687 "compare_and_write": true, 00:07:32.687 "abort": true, 00:07:32.687 "seek_hole": false, 00:07:32.687 "seek_data": false, 00:07:32.687 "copy": true, 00:07:32.687 "nvme_iov_md": false 00:07:32.687 }, 00:07:32.687 "memory_domains": [ 00:07:32.687 { 00:07:32.687 "dma_device_id": "system", 00:07:32.687 "dma_device_type": 1 00:07:32.687 } 00:07:32.687 ], 00:07:32.687 "driver_specific": { 00:07:32.687 "nvme": [ 00:07:32.687 { 00:07:32.687 "trid": { 00:07:32.687 "trtype": "TCP", 00:07:32.687 "adrfam": "IPv4", 00:07:32.687 "traddr": "10.0.0.2", 00:07:32.687 "trsvcid": "4420", 00:07:32.687 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:32.687 }, 00:07:32.687 "ctrlr_data": { 00:07:32.687 "cntlid": 1, 00:07:32.687 "vendor_id": "0x8086", 00:07:32.687 "model_number": "SPDK bdev Controller", 00:07:32.687 "serial_number": "SPDK0", 00:07:32.687 "firmware_revision": "25.01", 00:07:32.687 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:32.687 "oacs": { 00:07:32.687 "security": 0, 00:07:32.687 "format": 0, 00:07:32.687 "firmware": 0, 00:07:32.687 "ns_manage": 0 00:07:32.687 }, 00:07:32.687 "multi_ctrlr": true, 00:07:32.687 
"ana_reporting": false 00:07:32.687 }, 00:07:32.687 "vs": { 00:07:32.687 "nvme_version": "1.3" 00:07:32.687 }, 00:07:32.687 "ns_data": { 00:07:32.687 "id": 1, 00:07:32.687 "can_share": true 00:07:32.687 } 00:07:32.687 } 00:07:32.687 ], 00:07:32.687 "mp_policy": "active_passive" 00:07:32.687 } 00:07:32.687 } 00:07:32.687 ] 00:07:32.687 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2300331 00:07:32.687 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:32.687 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:32.687 Running I/O for 10 seconds... 00:07:33.622 Latency(us) 00:07:33.622 [2024-12-10T02:55:28.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.622 Nvme0n1 : 1.00 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:07:33.622 [2024-12-10T02:55:28.011Z] =================================================================================================================== 00:07:33.622 [2024-12-10T02:55:28.011Z] Total : 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:07:33.622 00:07:34.555 03:55:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6fa54bdf-94f4-46ef-b8f0-1ed2406b3c4f 00:07:34.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.813 Nvme0n1 : 2.00 15304.00 59.78 0.00 0.00 0.00 0.00 0.00 00:07:34.813 [2024-12-10T02:55:29.202Z] =================================================================================================================== 00:07:34.813 [2024-12-10T02:55:29.202Z] Total : 15304.00 59.78 0.00 0.00 0.00 0.00 0.00 00:07:34.813 00:07:34.813 true 00:07:34.813 03:55:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa54bdf-94f4-46ef-b8f0-1ed2406b3c4f 00:07:34.813 03:55:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:35.070 03:55:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:35.070 03:55:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:35.070 03:55:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2300331 00:07:35.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.636 Nvme0n1 : 3.00 15410.67 60.20 0.00 0.00 0.00 0.00 0.00 00:07:35.636 [2024-12-10T02:55:30.025Z] =================================================================================================================== 00:07:35.636 [2024-12-10T02:55:30.025Z] Total : 15410.67 60.20 0.00 0.00 0.00 0.00 0.00 00:07:35.636 00:07:36.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.600 Nvme0n1 : 4.00 15399.75 60.16 0.00 0.00 0.00 0.00 0.00 00:07:36.600 [2024-12-10T02:55:30.989Z] 
=================================================================================================================== 00:07:36.600 [2024-12-10T02:55:30.989Z] Total : 15399.75 60.16 0.00 0.00 0.00 0.00 0.00 00:07:36.600 00:07:37.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.974 Nvme0n1 : 5.00 15494.80 60.53 0.00 0.00 0.00 0.00 0.00 00:07:37.974 [2024-12-10T02:55:32.363Z] =================================================================================================================== 00:07:37.974 [2024-12-10T02:55:32.363Z] Total : 15494.80 60.53 0.00 0.00 0.00 0.00 0.00 00:07:37.974 00:07:38.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.909 Nvme0n1 : 6.00 15558.17 60.77 0.00 0.00 0.00 0.00 0.00 00:07:38.909 [2024-12-10T02:55:33.298Z] =================================================================================================================== 00:07:38.909 [2024-12-10T02:55:33.298Z] Total : 15558.17 60.77 0.00 0.00 0.00 0.00 0.00 00:07:38.909 00:07:39.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.844 Nvme0n1 : 7.00 15612.57 60.99 0.00 0.00 0.00 0.00 0.00 00:07:39.844 [2024-12-10T02:55:34.233Z] =================================================================================================================== 00:07:39.844 [2024-12-10T02:55:34.233Z] Total : 15612.57 60.99 0.00 0.00 0.00 0.00 0.00 00:07:39.844 00:07:40.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.777 Nvme0n1 : 8.00 15669.12 61.21 0.00 0.00 0.00 0.00 0.00 00:07:40.777 [2024-12-10T02:55:35.166Z] =================================================================================================================== 00:07:40.777 [2024-12-10T02:55:35.166Z] Total : 15669.12 61.21 0.00 0.00 0.00 0.00 0.00 00:07:40.777 00:07:41.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.711 Nvme0n1 : 9.00 15706.11 61.35 0.00 0.00 0.00 0.00 0.00 00:07:41.711 [2024-12-10T02:55:36.100Z] =================================================================================================================== 00:07:41.711 [2024-12-10T02:55:36.100Z] Total : 15706.11 61.35 0.00 0.00 0.00 0.00 0.00 00:07:41.711 00:07:42.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.644 Nvme0n1 : 10.00 15735.70 61.47 0.00 0.00 0.00 0.00 0.00 00:07:42.644 [2024-12-10T02:55:37.033Z] =================================================================================================================== 00:07:42.644 [2024-12-10T02:55:37.033Z] Total : 15735.70 61.47 0.00 0.00 0.00 0.00 0.00 00:07:42.644 00:07:42.644 00:07:42.644 Latency(us) 00:07:42.644 [2024-12-10T02:55:37.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.644 Nvme0n1 : 10.01 15735.49 61.47 0.00 0.00 8129.77 3859.34 21359.88 00:07:42.644 [2024-12-10T02:55:37.033Z] =================================================================================================================== 00:07:42.644 [2024-12-10T02:55:37.033Z] Total : 15735.49 61.47 0.00 0.00 8129.77 3859.34 21359.88 00:07:42.644 { 00:07:42.644 "results": [ 00:07:42.644 { 00:07:42.644 "job": "Nvme0n1", 00:07:42.644 "core_mask": "0x2", 00:07:42.644 "workload": "randwrite", 00:07:42.644 "status": "finished", 00:07:42.644 "queue_depth": 128, 00:07:42.644 "io_size": 4096, 00:07:42.644 
"runtime": 10.008265, 00:07:42.644 "iops": 15735.494613701776, 00:07:42.644 "mibps": 61.46677583477256, 00:07:42.644 "io_failed": 0, 00:07:42.644 "io_timeout": 0, 00:07:42.644 "avg_latency_us": 8129.773093329288, 00:07:42.644 "min_latency_us": 3859.342222222222, 00:07:42.644 "max_latency_us": 21359.881481481483 00:07:42.644 } 00:07:42.644 ], 00:07:42.644 "core_count": 1 00:07:42.644 } 00:07:42.644 03:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2300199 00:07:42.644 03:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2300199 ']' 00:07:42.644 03:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2300199 00:07:42.644 03:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:42.644 03:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.644 03:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2300199 00:07:42.902 03:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:42.902 03:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:42.902 03:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2300199' 00:07:42.902 killing process with pid 2300199 00:07:42.902 03:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2300199 00:07:42.902 Received shutdown signal, test time was about 10.000000 seconds 00:07:42.902 00:07:42.902 Latency(us) 00:07:42.902 [2024-12-10T02:55:37.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.902 [2024-12-10T02:55:37.291Z] =================================================================================================================== 00:07:42.902 [2024-12-10T02:55:37.291Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:42.902 03:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2300199 00:07:42.902 03:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:43.159 03:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:43.725 03:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa54bdf-94f4-46ef-b8f0-1ed2406b3c4f 00:07:43.725 03:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:43.725 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:43.725 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:43.725 03:55:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2297564 00:07:43.725 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2297564 00:07:43.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2297564 Killed "${NVMF_APP[@]}" "$@" 00:07:43.983 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:43.983 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:43.983 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:43.983 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:43.983 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:43.983 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2301647 00:07:43.983 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:43.983 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2301647 00:07:43.983 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2301647 ']' 00:07:43.983 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.983 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.983 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.983 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.983 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:43.983 [2024-12-10 03:55:38.169917] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:07:43.983 [2024-12-10 03:55:38.170011] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.983 [2024-12-10 03:55:38.244296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.983 [2024-12-10 03:55:38.300613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.983 [2024-12-10 03:55:38.300672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.983 [2024-12-10 03:55:38.300702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.983 [2024-12-10 03:55:38.300715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:43.983 [2024-12-10 03:55:38.300726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:43.983 [2024-12-10 03:55:38.301332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.241 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.241 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:44.241 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:44.241 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:44.241 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:44.241 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.241 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:44.499 [2024-12-10 03:55:38.689132] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:44.499 [2024-12-10 03:55:38.689287] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:44.499 [2024-12-10 03:55:38.689335] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:44.499 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:44.499 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 701af46d-43af-4ce0-be8e-3154efd336e7 00:07:44.499 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=701af46d-43af-4ce0-be8e-3154efd336e7 00:07:44.499 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:44.499 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:44.499 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:44.499 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:44.499 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:44.758 03:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 701af46d-43af-4ce0-be8e-3154efd336e7 -t 2000 00:07:45.015 [ 00:07:45.015 { 00:07:45.015 "name": "701af46d-43af-4ce0-be8e-3154efd336e7", 00:07:45.015 "aliases": [ 00:07:45.016 "lvs/lvol" 00:07:45.016 ], 00:07:45.016 "product_name": "Logical Volume", 00:07:45.016 "block_size": 4096, 00:07:45.016 "num_blocks": 38912, 00:07:45.016 "uuid": "701af46d-43af-4ce0-be8e-3154efd336e7", 00:07:45.016 "assigned_rate_limits": { 00:07:45.016 "rw_ios_per_sec": 0, 00:07:45.016 "rw_mbytes_per_sec": 0, 
00:07:45.016 "r_mbytes_per_sec": 0, 00:07:45.016 "w_mbytes_per_sec": 0 00:07:45.016 }, 00:07:45.016 "claimed": false, 00:07:45.016 "zoned": false, 00:07:45.016 "supported_io_types": { 00:07:45.016 "read": true, 00:07:45.016 "write": true, 00:07:45.016 "unmap": true, 00:07:45.016 "flush": false, 00:07:45.016 "reset": true, 00:07:45.016 "nvme_admin": false, 00:07:45.016 "nvme_io": false, 00:07:45.016 "nvme_io_md": false, 00:07:45.016 "write_zeroes": true, 00:07:45.016 "zcopy": false, 00:07:45.016 "get_zone_info": false, 00:07:45.016 "zone_management": false, 00:07:45.016 "zone_append": false, 00:07:45.016 "compare": false, 00:07:45.016 "compare_and_write": false, 00:07:45.016 "abort": false, 00:07:45.016 "seek_hole": true, 00:07:45.016 "seek_data": true, 00:07:45.016 "copy": false, 00:07:45.016 "nvme_iov_md": false 00:07:45.016 }, 00:07:45.016 "driver_specific": { 00:07:45.016 "lvol": { 00:07:45.016 "lvol_store_uuid": "6fa54bdf-94f4-46ef-b8f0-1ed2406b3c4f", 00:07:45.016 "base_bdev": "aio_bdev", 00:07:45.016 "thin_provision": false, 00:07:45.016 "num_allocated_clusters": 38, 00:07:45.016 "snapshot": false, 00:07:45.016 "clone": false, 00:07:45.016 "esnap_clone": false 00:07:45.016 } 00:07:45.016 } 00:07:45.016 } 00:07:45.016 ] 00:07:45.016 03:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:45.016 03:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa54bdf-94f4-46ef-b8f0-1ed2406b3c4f 00:07:45.016 03:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:45.274 03:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:45.274 03:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa54bdf-94f4-46ef-b8f0-1ed2406b3c4f 00:07:45.274 03:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:45.533 03:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:45.533 03:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:45.791 [2024-12-10 03:55:40.046738] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:45.791 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa54bdf-94f4-46ef-b8f0-1ed2406b3c4f 00:07:45.791 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:45.791 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa54bdf-94f4-46ef-b8f0-1ed2406b3c4f 00:07:45.791 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.791 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.791 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.791 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.791 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.791 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.791 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.791 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:45.791 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa54bdf-94f4-46ef-b8f0-1ed2406b3c4f 00:07:46.050 request: 00:07:46.050 { 00:07:46.050 "uuid": "6fa54bdf-94f4-46ef-b8f0-1ed2406b3c4f", 00:07:46.050 "method": "bdev_lvol_get_lvstores", 00:07:46.050 "req_id": 1 00:07:46.050 } 00:07:46.050 Got JSON-RPC error response 00:07:46.050 response: 00:07:46.050 { 00:07:46.050 "code": -19, 00:07:46.050 "message": "No such device" 00:07:46.050 } 00:07:46.050 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:46.050 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:46.050 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:46.050 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:46.050 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:46.308 aio_bdev 00:07:46.308 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 701af46d-43af-4ce0-be8e-3154efd336e7 00:07:46.308 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=701af46d-43af-4ce0-be8e-3154efd336e7 00:07:46.308 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:46.308 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:46.308 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:46.308 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:46.308 03:55:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:46.566 03:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 701af46d-43af-4ce0-be8e-3154efd336e7 -t 2000 00:07:46.825 [ 00:07:46.825 { 00:07:46.825 "name": "701af46d-43af-4ce0-be8e-3154efd336e7", 00:07:46.825 "aliases": [ 00:07:46.825 "lvs/lvol" 00:07:46.825 ], 00:07:46.825 "product_name": "Logical Volume", 00:07:46.825 "block_size": 4096, 00:07:46.825 "num_blocks": 38912, 00:07:46.825 "uuid": "701af46d-43af-4ce0-be8e-3154efd336e7", 00:07:46.825 "assigned_rate_limits": { 00:07:46.825 "rw_ios_per_sec": 0, 00:07:46.825 "rw_mbytes_per_sec": 0, 00:07:46.825 "r_mbytes_per_sec": 0, 00:07:46.825 "w_mbytes_per_sec": 0 00:07:46.825 }, 00:07:46.825 "claimed": false, 00:07:46.825 "zoned": false, 00:07:46.825 "supported_io_types": { 00:07:46.825 "read": true, 00:07:46.825 "write": true, 00:07:46.825 "unmap": true, 00:07:46.825 "flush": false, 00:07:46.825 "reset": true, 00:07:46.825 "nvme_admin": false, 00:07:46.825 "nvme_io": false, 00:07:46.825 "nvme_io_md": false, 00:07:46.825 "write_zeroes": true, 00:07:46.825 "zcopy": false, 00:07:46.825 "get_zone_info": false, 00:07:46.825 "zone_management": false, 00:07:46.825 "zone_append": false, 00:07:46.825 "compare": false, 00:07:46.825 "compare_and_write": false, 00:07:46.825 "abort": false, 00:07:46.825 "seek_hole": true, 00:07:46.825 "seek_data": true, 00:07:46.825 "copy": false, 00:07:46.825 "nvme_iov_md": false 00:07:46.825 }, 00:07:46.825 "driver_specific": { 00:07:46.825 "lvol": { 00:07:46.825 "lvol_store_uuid": "6fa54bdf-94f4-46ef-b8f0-1ed2406b3c4f", 00:07:46.825 "base_bdev": "aio_bdev", 00:07:46.825 "thin_provision": false, 00:07:46.825 "num_allocated_clusters": 38, 00:07:46.826 "snapshot": false, 00:07:46.826 "clone": false, 00:07:46.826 "esnap_clone": false 00:07:46.826 } 00:07:46.826 } 00:07:46.826 } 00:07:46.826 ] 00:07:46.826 03:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:46.826 03:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa54bdf-94f4-46ef-b8f0-1ed2406b3c4f 00:07:46.826 03:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:47.084 03:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:47.084 03:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa54bdf-94f4-46ef-b8f0-1ed2406b3c4f 00:07:47.084 03:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:47.343 03:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:47.343 03:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 701af46d-43af-4ce0-be8e-3154efd336e7 00:07:47.909 03:55:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6fa54bdf-94f4-46ef-b8f0-1ed2406b3c4f 00:07:47.909 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:48.167 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:48.425 00:07:48.425 real 0m19.376s 00:07:48.425 user 0m49.292s 00:07:48.425 sys 0m4.454s 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:48.425 ************************************ 00:07:48.425 END TEST lvs_grow_dirty 00:07:48.425 ************************************ 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:48.425 nvmf_trace.0 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:48.425 rmmod nvme_tcp 00:07:48.425 rmmod nvme_fabrics 00:07:48.425 rmmod nvme_keyring 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:48.425 
03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2301647 ']' 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2301647 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2301647 ']' 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2301647 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2301647 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2301647' 00:07:48.425 killing process with pid 2301647 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2301647 00:07:48.425 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2301647 00:07:48.683 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:48.683 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:48.683 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:48.683 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:48.683 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:48.683 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:48.683 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:48.683 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:48.683 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:48.683 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.683 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.683 03:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.592 03:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:50.592 00:07:50.592 real 0m42.640s 00:07:50.592 user 1m12.635s 00:07:50.592 sys 0m8.246s 00:07:50.592 03:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.592 03:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:50.592 ************************************ 00:07:50.592 END TEST nvmf_lvs_grow 00:07:50.592 ************************************ 00:07:50.592 03:55:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:50.592 03:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:50.592 03:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.592 03:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:50.852 ************************************ 00:07:50.852 START TEST nvmf_bdev_io_wait 00:07:50.852 ************************************ 00:07:50.852 03:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:50.852 * Looking for test storage... 00:07:50.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.852 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:50.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.853 --rc genhtml_branch_coverage=1 00:07:50.853 --rc genhtml_function_coverage=1 00:07:50.853 --rc genhtml_legend=1 00:07:50.853 --rc geninfo_all_blocks=1 00:07:50.853 --rc geninfo_unexecuted_blocks=1 00:07:50.853 00:07:50.853 ' 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.853 --rc genhtml_branch_coverage=1 00:07:50.853 --rc genhtml_function_coverage=1 00:07:50.853 --rc genhtml_legend=1 00:07:50.853 --rc geninfo_all_blocks=1 00:07:50.853 --rc geninfo_unexecuted_blocks=1 00:07:50.853 00:07:50.853 ' 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.853 --rc genhtml_branch_coverage=1 00:07:50.853 --rc genhtml_function_coverage=1 00:07:50.853 --rc genhtml_legend=1 00:07:50.853 --rc geninfo_all_blocks=1 00:07:50.853 --rc geninfo_unexecuted_blocks=1 00:07:50.853 00:07:50.853 ' 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.853 --rc genhtml_branch_coverage=1 00:07:50.853 --rc genhtml_function_coverage=1 00:07:50.853 --rc genhtml_legend=1 00:07:50.853 --rc geninfo_all_blocks=1 00:07:50.853 --rc geninfo_unexecuted_blocks=1 00:07:50.853 00:07:50.853 ' 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.853 03:55:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:50.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:50.853 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:53.390 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:53.390 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.390 03:55:47 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:53.390 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:53.390 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.390 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:53.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:07:53.391 00:07:53.391 --- 10.0.0.2 ping statistics --- 00:07:53.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.391 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:53.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:07:53.391 00:07:53.391 --- 10.0.0.1 ping statistics --- 00:07:53.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.391 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2304208 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2304208 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2304208 ']' 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.391 [2024-12-10 03:55:47.506118] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:07:53.391 [2024-12-10 03:55:47.506198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.391 [2024-12-10 03:55:47.577778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.391 [2024-12-10 03:55:47.635735] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.391 [2024-12-10 03:55:47.635796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.391 [2024-12-10 03:55:47.635822] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.391 [2024-12-10 03:55:47.635833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.391 [2024-12-10 03:55:47.635842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.391 [2024-12-10 03:55:47.637618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.391 [2024-12-10 03:55:47.637647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.391 [2024-12-10 03:55:47.637706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.391 [2024-12-10 03:55:47.637709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.391 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:53.650 [2024-12-10 03:55:47.839012] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.650 Malloc0 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.650 [2024-12-10 03:55:47.892134] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2304238 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2304240 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:53.650 { 00:07:53.650 "params": { 
00:07:53.650 "name": "Nvme$subsystem", 00:07:53.650 "trtype": "$TEST_TRANSPORT", 00:07:53.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.650 "adrfam": "ipv4", 00:07:53.650 "trsvcid": "$NVMF_PORT", 00:07:53.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.650 "hdgst": ${hdgst:-false}, 00:07:53.650 "ddgst": ${ddgst:-false} 00:07:53.650 }, 00:07:53.650 "method": "bdev_nvme_attach_controller" 00:07:53.650 } 00:07:53.650 EOF 00:07:53.650 )") 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2304242 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:53.650 { 00:07:53.650 "params": { 00:07:53.650 "name": "Nvme$subsystem", 00:07:53.650 "trtype": "$TEST_TRANSPORT", 00:07:53.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.650 "adrfam": "ipv4", 00:07:53.650 "trsvcid": "$NVMF_PORT", 00:07:53.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.650 "hdgst": ${hdgst:-false}, 00:07:53.650 "ddgst": ${ddgst:-false} 00:07:53.650 }, 00:07:53.650 "method": "bdev_nvme_attach_controller" 00:07:53.650 } 00:07:53.650 EOF 00:07:53.650 )") 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2304245 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:53.650 { 00:07:53.650 "params": { 00:07:53.650 "name": "Nvme$subsystem", 00:07:53.650 "trtype": "$TEST_TRANSPORT", 00:07:53.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.650 "adrfam": "ipv4", 00:07:53.650 "trsvcid": "$NVMF_PORT", 00:07:53.650 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.650 "hdgst": ${hdgst:-false}, 00:07:53.650 "ddgst": ${ddgst:-false} 00:07:53.650 }, 00:07:53.650 "method": "bdev_nvme_attach_controller" 00:07:53.650 } 00:07:53.650 EOF 00:07:53.650 )") 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:53.650 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:53.650 { 00:07:53.650 "params": { 00:07:53.650 "name": "Nvme$subsystem", 00:07:53.650 "trtype": "$TEST_TRANSPORT", 00:07:53.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.651 "adrfam": "ipv4", 00:07:53.651 "trsvcid": "$NVMF_PORT", 00:07:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.651 "hdgst": ${hdgst:-false}, 00:07:53.651 "ddgst": ${ddgst:-false} 00:07:53.651 }, 00:07:53.651 "method": "bdev_nvme_attach_controller" 00:07:53.651 } 00:07:53.651 EOF 00:07:53.651 )") 00:07:53.651 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:53.651 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2304238 00:07:53.651 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:53.651 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:53.651 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:53.651 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:53.651 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:53.651 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:53.651 "params": { 00:07:53.651 "name": "Nvme1", 00:07:53.651 "trtype": "tcp", 00:07:53.651 "traddr": "10.0.0.2", 00:07:53.651 "adrfam": "ipv4", 00:07:53.651 "trsvcid": "4420", 00:07:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:53.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:53.651 "hdgst": false, 00:07:53.651 "ddgst": false 00:07:53.651 }, 00:07:53.651 "method": "bdev_nvme_attach_controller" 00:07:53.651 }' 00:07:53.651 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:53.651 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:53.651 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:53.651 "params": { 00:07:53.651 "name": "Nvme1", 00:07:53.651 "trtype": "tcp", 00:07:53.651 "traddr": "10.0.0.2", 00:07:53.651 "adrfam": "ipv4", 00:07:53.651 "trsvcid": "4420", 00:07:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:53.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:53.651 "hdgst": false, 00:07:53.651 "ddgst": false 00:07:53.651 }, 00:07:53.651 "method": "bdev_nvme_attach_controller" 00:07:53.651 }' 00:07:53.651 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:53.651 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:53.651 "params": { 00:07:53.651 "name": "Nvme1", 00:07:53.651 "trtype": "tcp", 00:07:53.651 "traddr": "10.0.0.2", 00:07:53.651 "adrfam": "ipv4", 00:07:53.651 "trsvcid": "4420", 00:07:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:53.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:53.651 "hdgst": false, 00:07:53.651 "ddgst": false 00:07:53.651 }, 00:07:53.651 "method": "bdev_nvme_attach_controller" 00:07:53.651 }' 00:07:53.651 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:53.651 03:55:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:53.651 "params": { 00:07:53.651 "name": "Nvme1", 00:07:53.651 "trtype": "tcp", 00:07:53.651 "traddr": "10.0.0.2", 00:07:53.651 "adrfam": "ipv4", 00:07:53.651 "trsvcid": "4420", 00:07:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:53.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:53.651 "hdgst": false, 00:07:53.651 "ddgst": false 00:07:53.651 }, 00:07:53.651 "method": "bdev_nvme_attach_controller" 00:07:53.651 }' 00:07:53.651 [2024-12-10 03:55:47.941814] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:07:53.651 [2024-12-10 03:55:47.941814] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:07:53.651 [2024-12-10 03:55:47.941814] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:07:53.651 [2024-12-10 03:55:47.941814] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:07:53.651 [2024-12-10 03:55:47.941936] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:53.651 [2024-12-10 03:55:47.941936] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:53.651 [2024-12-10 03:55:47.941937] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:53.651 [2024-12-10 03:55:47.941936] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:53.909 [2024-12-10 03:55:48.120400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.909 [2024-12-10 03:55:48.173696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:53.909 [2024-12-10 03:55:48.214301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.909 [2024-12-10 03:55:48.270568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:54.167 [2024-12-10 03:55:48.318943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.167 [2024-12-10 03:55:48.372835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:54.167 [2024-12-10 03:55:48.420292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.167 [2024-12-10 03:55:48.474551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:54.167 Running I/O for 1 seconds... 00:07:54.426 Running I/O for 1 seconds... 00:07:54.426 Running I/O for 1 seconds... 00:07:54.426 Running I/O for 1 seconds...
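Each of the four bdevperf instances launched above receives its NVMe-oF controller from the JSON produced by gen_nvmf_target_json and handed over as --json /dev/fd/63, which is bash process substitution of that generator's output. A minimal stand-alone reconstruction is sketched below; the "params" and "method" fields are taken verbatim from the trace, while the surrounding "subsystems"/"bdev"/"config" wrapper, the attach_nvme1.json file name and the relative bdevperf path are assumptions based on the usual SPDK app-config layout:

# Write the attach config to a file instead of using process substitution.
cat > attach_nvme1.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

# One of the four launches traced above (the write worker: core mask 0x10, shm id 1,
# 128 outstanding 4096-byte I/Os for 1 second, 256 MB of app memory).
./build/examples/bdevperf -m 0x10 -i 1 --json attach_nvme1.json -q 128 -o 4096 -w write -t 1 -s 256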
00:07:55.420 6817.00 IOPS, 26.63 MiB/s 00:07:55.420 Latency(us) 00:07:55.420 [2024-12-10T02:55:49.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.420 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:55.420 Nvme1n1 : 1.02 6841.89 26.73 0.00 0.00 18483.36 7475.96 36117.62 00:07:55.420 [2024-12-10T02:55:49.809Z] =================================================================================================================== 00:07:55.420 [2024-12-10T02:55:49.809Z] Total : 6841.89 26.73 0.00 0.00 18483.36 7475.96 36117.62 00:07:55.420 9240.00 IOPS, 36.09 MiB/s 00:07:55.420 Latency(us) 00:07:55.420 [2024-12-10T02:55:49.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.420 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:55.420 Nvme1n1 : 1.01 9276.87 36.24 0.00 0.00 13724.63 8349.77 24175.50 00:07:55.420 [2024-12-10T02:55:49.809Z] =================================================================================================================== 00:07:55.420 [2024-12-10T02:55:49.809Z] Total : 9276.87 36.24 0.00 0.00 13724.63 8349.77 24175.50 00:07:55.420 6547.00 IOPS, 25.57 MiB/s 00:07:55.420 Latency(us) 00:07:55.420 [2024-12-10T02:55:49.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.420 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:55.420 Nvme1n1 : 1.01 6646.94 25.96 0.00 0.00 19193.76 4563.25 43496.49 00:07:55.420 [2024-12-10T02:55:49.809Z] =================================================================================================================== 00:07:55.420 [2024-12-10T02:55:49.809Z] Total : 6646.94 25.96 0.00 0.00 19193.76 4563.25 43496.49 00:07:55.420 192016.00 IOPS, 750.06 MiB/s 00:07:55.420 Latency(us) 00:07:55.420 [2024-12-10T02:55:49.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.420 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:55.420 Nvme1n1 : 1.00 191656.59 748.66 0.00 0.00 664.07 297.34 1868.99 00:07:55.420 [2024-12-10T02:55:49.809Z] =================================================================================================================== 00:07:55.420 [2024-12-10T02:55:49.809Z] Total : 191656.59 748.66 0.00 0.00 664.07 297.34 1868.99 00:07:55.678 03:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2304240 00:07:55.678 03:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2304242 00:07:55.678 03:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2304245 00:07:55.678 03:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:55.678 03:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.678 03:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.678 03:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.678 03:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:55.678 03:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:55.678 03:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:07:55.678 03:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:55.678 03:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:55.678 03:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:55.678 03:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:55.678 03:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:55.678 rmmod nvme_tcp 00:07:55.678 rmmod nvme_fabrics 00:07:55.678 rmmod nvme_keyring 00:07:55.678 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:55.678 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:55.678 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:55.678 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2304208 ']' 00:07:55.678 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2304208 00:07:55.678 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2304208 ']' 00:07:55.678 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2304208 00:07:55.678 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:55.678 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.678 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2304208 00:07:55.678 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.938 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.938 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2304208' 00:07:55.938 killing process with pid 2304208 00:07:55.938 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2304208 00:07:55.938 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2304208 00:07:55.938 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:55.938 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:55.938 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:55.938 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:55.938 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:55.938 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:55.938 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:55.938 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:55.938 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:55.938 03:55:50 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.938 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.938 03:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:58.475 00:07:58.475 real 0m7.330s 00:07:58.475 user 0m16.144s 00:07:58.475 sys 0m3.558s 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:58.475 ************************************ 00:07:58.475 END TEST nvmf_bdev_io_wait 00:07:58.475 ************************************ 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:58.475 ************************************ 00:07:58.475 START TEST nvmf_queue_depth 00:07:58.475 ************************************ 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:58.475 * Looking for test storage... 
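Between the bdevperf results and the start of nvmf_queue_depth above, nvmftestfini tears the fixture back down: the kernel initiator modules are unloaded, the nvmf_tgt process is killed, only the iptables rules tagged SPDK_NVMF are removed, and the namespaced interface is released. Condensed into a hand-written sketch (the nvmfpid value and the cvl_0_* names are simply what this run used, and the _remove_spdk_ns helper is approximated here by an explicit ip netns delete):

sync
modprobe -v -r nvme-tcp        # as logged above, this also drags out nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                # SIGTERM to nvmf_tgt, then wait for it to exit
iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the rules the test tagged
ip netns delete cvl_0_0_ns_spdk                        # cvl_0_0 returns to the root namespace
ip -4 addr flush cvl_0_1                               # clear the initiator-side address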
00:07:58.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:58.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.475 --rc genhtml_branch_coverage=1 00:07:58.475 --rc genhtml_function_coverage=1 00:07:58.475 --rc genhtml_legend=1 00:07:58.475 --rc geninfo_all_blocks=1 00:07:58.475 --rc geninfo_unexecuted_blocks=1 00:07:58.475 00:07:58.475 ' 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:58.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.475 --rc genhtml_branch_coverage=1 00:07:58.475 --rc genhtml_function_coverage=1 00:07:58.475 --rc genhtml_legend=1 00:07:58.475 --rc geninfo_all_blocks=1 00:07:58.475 --rc geninfo_unexecuted_blocks=1 00:07:58.475 00:07:58.475 ' 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:58.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.475 --rc genhtml_branch_coverage=1 00:07:58.475 --rc genhtml_function_coverage=1 00:07:58.475 --rc genhtml_legend=1 00:07:58.475 --rc geninfo_all_blocks=1 00:07:58.475 --rc geninfo_unexecuted_blocks=1 00:07:58.475 00:07:58.475 ' 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:58.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.475 --rc genhtml_branch_coverage=1 00:07:58.475 --rc genhtml_function_coverage=1 00:07:58.475 --rc genhtml_legend=1 00:07:58.475 --rc geninfo_all_blocks=1 00:07:58.475 --rc geninfo_unexecuted_blocks=1 00:07:58.475 00:07:58.475 ' 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.475 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:58.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:58.476 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:00.381 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:00.381 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.381 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:00.382 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:00.382 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:00.382 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:00.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:08:00.641 00:08:00.641 --- 10.0.0.2 ping statistics --- 00:08:00.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.641 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:00.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:08:00.641 00:08:00.641 --- 10.0.0.1 ping statistics --- 00:08:00.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.641 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2306480 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2306480 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2306480 ']' 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.641 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.641 [2024-12-10 03:55:54.848292] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:08:00.641 [2024-12-10 03:55:54.848406] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.641 [2024-12-10 03:55:54.923539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.642 [2024-12-10 03:55:54.975952] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.642 [2024-12-10 03:55:54.976017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.642 [2024-12-10 03:55:54.976044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.642 [2024-12-10 03:55:54.976054] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.642 [2024-12-10 03:55:54.976064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.642 [2024-12-10 03:55:54.976695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.901 [2024-12-10 03:55:55.118610] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.901 Malloc0 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.901 03:55:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.901 [2024-12-10 03:55:55.165519] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2306613 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2306613 /var/tmp/bdevperf.sock 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2306613 ']' 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:00.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.901 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.901 [2024-12-10 03:55:55.210975] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:08:00.901 [2024-12-10 03:55:55.211051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2306613 ] 00:08:00.901 [2024-12-10 03:55:55.277949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.159 [2024-12-10 03:55:55.335496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.159 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.159 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:01.159 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:01.159 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.159 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.417 NVMe0n1 00:08:01.417 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.417 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:01.417 Running I/O for 10 seconds... 00:08:03.725 8206.00 IOPS, 32.05 MiB/s [2024-12-10T02:55:59.049Z] 8669.50 IOPS, 33.87 MiB/s [2024-12-10T02:55:59.983Z] 8538.67 IOPS, 33.35 MiB/s [2024-12-10T02:56:00.915Z] 8656.00 IOPS, 33.81 MiB/s [2024-12-10T02:56:01.847Z] 8602.20 IOPS, 33.60 MiB/s [2024-12-10T02:56:02.780Z] 8688.67 IOPS, 33.94 MiB/s [2024-12-10T02:56:04.154Z] 8655.29 IOPS, 33.81 MiB/s [2024-12-10T02:56:05.087Z] 8695.12 IOPS, 33.97 MiB/s [2024-12-10T02:56:06.020Z] 8733.11 IOPS, 34.11 MiB/s [2024-12-10T02:56:06.020Z] 8712.10 IOPS, 34.03 MiB/s 00:08:11.631 Latency(us) 00:08:11.631 [2024-12-10T02:56:06.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.631 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:11.631 Verification LBA range: start 0x0 length 0x4000 00:08:11.631 NVMe0n1 : 10.07 8753.51 34.19 0.00 0.00 116476.73 14854.83 71846.87 00:08:11.631 [2024-12-10T02:56:06.020Z] =================================================================================================================== 00:08:11.631 [2024-12-10T02:56:06.020Z] Total : 8753.51 34.19 0.00 0.00 116476.73 14854.83 71846.87 00:08:11.631 { 00:08:11.631 "results": [ 00:08:11.631 { 00:08:11.631 "job": "NVMe0n1", 00:08:11.631 "core_mask": "0x1", 00:08:11.631 "workload": "verify", 00:08:11.631 "status": "finished", 00:08:11.631 "verify_range": { 00:08:11.631 "start": 0, 00:08:11.631 "length": 16384 00:08:11.631 }, 00:08:11.631 "queue_depth": 1024, 00:08:11.631 "io_size": 4096, 00:08:11.631 "runtime": 10.069565, 00:08:11.631 "iops": 8753.506233883985, 00:08:11.631 "mibps": 34.19338372610932, 00:08:11.631 "io_failed": 0, 00:08:11.631 "io_timeout": 0, 00:08:11.631 "avg_latency_us": 116476.72683218707, 00:08:11.631 "min_latency_us": 14854.826666666666, 00:08:11.631 "max_latency_us": 71846.87407407408 00:08:11.631 } 00:08:11.631 ], 00:08:11.631 "core_count": 1 00:08:11.631 } 00:08:11.631 03:56:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2306613 00:08:11.631 03:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2306613 ']' 00:08:11.631 03:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2306613 00:08:11.631 03:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:11.631 03:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.631 03:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2306613 00:08:11.631 03:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.631 03:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.631 03:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2306613' 00:08:11.631 killing process with pid 2306613 00:08:11.631 03:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2306613 00:08:11.631 Received shutdown signal, test time was about 10.000000 seconds 00:08:11.631 00:08:11.631 Latency(us) 00:08:11.631 [2024-12-10T02:56:06.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.631 [2024-12-10T02:56:06.020Z] =================================================================================================================== 00:08:11.631 [2024-12-10T02:56:06.020Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:11.631 03:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2306613 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:11.889 rmmod nvme_tcp 00:08:11.889 rmmod nvme_fabrics 00:08:11.889 rmmod nvme_keyring 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2306480 ']' 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2306480 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2306480 ']' 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2306480 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2306480 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2306480' 00:08:11.889 killing process with pid 2306480 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2306480 00:08:11.889 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2306480 00:08:12.148 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:12.148 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:12.148 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:12.148 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:12.148 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:12.148 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:12.148 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:12.148 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:12.148 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:12.148 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.148 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.148 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:14.686 00:08:14.686 real 0m16.172s 00:08:14.686 user 0m22.705s 00:08:14.686 sys 0m3.085s 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:14.686 ************************************ 00:08:14.686 END TEST nvmf_queue_depth 00:08:14.686 ************************************ 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:14.686 ************************************ 00:08:14.686 START TEST nvmf_target_multipath 00:08:14.686 ************************************ 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:14.686 * Looking for test storage... 00:08:14.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:14.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.686 --rc genhtml_branch_coverage=1 00:08:14.686 --rc genhtml_function_coverage=1 00:08:14.686 --rc genhtml_legend=1 00:08:14.686 --rc geninfo_all_blocks=1 00:08:14.686 --rc geninfo_unexecuted_blocks=1 00:08:14.686 00:08:14.686 ' 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:14.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.686 --rc genhtml_branch_coverage=1 00:08:14.686 --rc genhtml_function_coverage=1 00:08:14.686 --rc genhtml_legend=1 00:08:14.686 --rc geninfo_all_blocks=1 00:08:14.686 --rc geninfo_unexecuted_blocks=1 00:08:14.686 00:08:14.686 ' 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:14.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.686 --rc genhtml_branch_coverage=1 00:08:14.686 --rc genhtml_function_coverage=1 00:08:14.686 --rc genhtml_legend=1 00:08:14.686 --rc geninfo_all_blocks=1 00:08:14.686 --rc geninfo_unexecuted_blocks=1 00:08:14.686 00:08:14.686 ' 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:14.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.686 --rc genhtml_branch_coverage=1 00:08:14.686 --rc genhtml_function_coverage=1 00:08:14.686 --rc genhtml_legend=1 00:08:14.686 --rc geninfo_all_blocks=1 00:08:14.686 --rc geninfo_unexecuted_blocks=1 00:08:14.686 00:08:14.686 ' 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:14.686 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:14.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:14.687 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:16.592 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:16.592 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:16.593 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:16.593 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.593 03:56:10 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:16.593 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:16.593 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:16.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:08:16.854 00:08:16.854 --- 10.0.0.2 ping statistics --- 00:08:16.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.854 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:16.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:08:16.854 00:08:16.854 --- 10.0.0.1 ping statistics --- 00:08:16.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.854 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:16.854 only one NIC for nvmf test 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
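The trace above repeats the same TCP test-network bring-up for each target test: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target side, the second port (cvl_0_1) stays in the default namespace as the initiator side, and a single iptables rule opens TCP port 4420. Condensed from the commands visible in this log, the bring-up is roughly the following sketch; the interface names cvl_0_0/cvl_0_1, the namespace name cvl_0_0_ns_spdk and the 10.0.0.0/24 addresses are simply the values this particular run used, not fixed requirements.

    # Target port lives in its own namespace; initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address the two ends of the link: 10.0.0.1 = initiator side, 10.0.0.2 = target side.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring the links (and the namespace loopback) up.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP traffic in on the initiator-side interface; the comment tag lets
    # the cleanup path strip exactly this rule later (grep -v SPDK_NVMF | iptables-restore).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Sanity-check both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1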
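With that network in place, the nvmf_queue_depth run earlier in this trace (timestamps around 03:55:55) provisions the target and drives it with bdevperf at queue depth 1024. The rpc_cmd calls shown in the trace are effectively the harness's thin wrapper around scripts/rpc.py; expressed with rpc.py directly, and with the long Jenkins workspace paths abbreviated, the sequence boils down to roughly the sketch below (socket paths, NQN and serial taken from this log; the harness additionally waits for each RPC socket before issuing commands).

    # Start the target inside the namespace and create the TCP transport.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

    # Back the subsystem with a 64 MiB malloc bdev (512-byte blocks) and expose it on 10.0.0.2:4420.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Run bdevperf as the initiator: 4 KiB verify workload, queue depth 1024, 10 seconds.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
          -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The reported result for this run (the Latency table above) was about 8.7k IOPS / 34 MiB/s for the 10-second verify pass before the harness killed bdevperf and the target and unloaded the nvme-tcp modules.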
00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:16.854 rmmod nvme_tcp 00:08:16.854 rmmod nvme_fabrics 00:08:16.854 rmmod nvme_keyring 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.854 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.392 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:19.393 00:08:19.393 real 0m4.612s 00:08:19.393 user 0m0.991s 00:08:19.393 sys 0m1.638s 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:19.393 ************************************ 00:08:19.393 END TEST nvmf_target_multipath 00:08:19.393 ************************************ 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.393 ************************************ 00:08:19.393 START TEST nvmf_zcopy 00:08:19.393 ************************************ 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:19.393 * Looking for test storage... 
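Each suite in this log is driven through run_test, which produces the START TEST / END TEST banners and the real/user/sys timing lines visible above. A rough sketch of what such a wrapper does, assuming a simplified shape (the real helper lives in autotest_common.sh and also handles xtrace toggling and argument checks not shown here):

  # run_test <name> <command...>: banner, time the command, banner, propagate status.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      # 'time' emits the real/user/sys lines seen in the log.
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }

  # Invocation matching the trace (path shortened relative to the SPDK checkout):
  run_test nvmf_zcopy ./test/nvmf/target/zcopy.sh --transport=tcp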
00:08:19.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:19.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.393 --rc genhtml_branch_coverage=1 00:08:19.393 --rc genhtml_function_coverage=1 00:08:19.393 --rc genhtml_legend=1 00:08:19.393 --rc geninfo_all_blocks=1 00:08:19.393 --rc geninfo_unexecuted_blocks=1 00:08:19.393 00:08:19.393 ' 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:19.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.393 --rc genhtml_branch_coverage=1 00:08:19.393 --rc genhtml_function_coverage=1 00:08:19.393 --rc genhtml_legend=1 00:08:19.393 --rc geninfo_all_blocks=1 00:08:19.393 --rc geninfo_unexecuted_blocks=1 00:08:19.393 00:08:19.393 ' 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:19.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.393 --rc genhtml_branch_coverage=1 00:08:19.393 --rc genhtml_function_coverage=1 00:08:19.393 --rc genhtml_legend=1 00:08:19.393 --rc geninfo_all_blocks=1 00:08:19.393 --rc geninfo_unexecuted_blocks=1 00:08:19.393 00:08:19.393 ' 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:19.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.393 --rc genhtml_branch_coverage=1 00:08:19.393 --rc genhtml_function_coverage=1 00:08:19.393 --rc genhtml_legend=1 00:08:19.393 --rc geninfo_all_blocks=1 00:08:19.393 --rc geninfo_unexecuted_blocks=1 00:08:19.393 00:08:19.393 ' 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.393 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:19.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:19.394 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:19.394 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:19.394 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:19.394 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:19.394 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:19.394 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:19.394 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:19.394 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:19.394 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:19.394 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.394 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.394 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.394 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:19.394 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:19.394 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:19.394 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:21.300 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:21.300 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:21.300 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:21.300 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.300 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:21.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:08:21.559 00:08:21.559 --- 10.0.0.2 ping statistics --- 00:08:21.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.559 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:21.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:08:21.559 00:08:21.559 --- 10.0.0.1 ping statistics --- 00:08:21.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.559 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2311825 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2311825 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2311825 ']' 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.559 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.559 [2024-12-10 03:56:15.865985] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
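For reference, the condensed sequence behind the nvmf_tcp_init and nvmfappstart traces above: the two ports of the e810 NIC are looped back to each other, the target-side port is moved into its own network namespace, and nvmf_tgt is started inside that namespace. A sketch assembled from the commands shown in the trace; the polling loop at the end is a simplified stand-in for the real waitforlisten helper:

  TARGET_IF=cvl_0_0       INITIATOR_IF=cvl_0_1
  TARGET_IP=10.0.0.2      INITIATOR_IP=10.0.0.1
  NS=cvl_0_0_ns_spdk

  # Target port lives in its own namespace; initiator port stays in the root namespace.
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # Sanity check both directions, as in the trace.
  ping -c 1 "$TARGET_IP"
  ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"

  # Start the NVMe-oF target inside the namespace (flags copied from the trace).
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmf_pid=$!

  # Simplified stand-in for waitforlisten: poll until the RPC socket answers.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done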
00:08:21.559 [2024-12-10 03:56:15.866075] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.559 [2024-12-10 03:56:15.937318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.818 [2024-12-10 03:56:15.990542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.818 [2024-12-10 03:56:15.990643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.818 [2024-12-10 03:56:15.990657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.818 [2024-12-10 03:56:15.990668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.818 [2024-12-10 03:56:15.990677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.818 [2024-12-10 03:56:15.991325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.818 [2024-12-10 03:56:16.131241] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.818 [2024-12-10 03:56:16.147431] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:21.818 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.819 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.819 malloc0 00:08:21.819 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.819 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:21.819 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.819 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.819 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.819 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:21.819 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:21.819 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:21.819 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:21.819 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:21.819 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:21.819 { 00:08:21.819 "params": { 00:08:21.819 "name": "Nvme$subsystem", 00:08:21.819 "trtype": "$TEST_TRANSPORT", 00:08:21.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:21.819 "adrfam": "ipv4", 00:08:21.819 "trsvcid": "$NVMF_PORT", 00:08:21.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:21.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:21.819 "hdgst": ${hdgst:-false}, 00:08:21.819 "ddgst": ${ddgst:-false} 00:08:21.819 }, 00:08:21.819 "method": "bdev_nvme_attach_controller" 00:08:21.819 } 00:08:21.819 EOF 00:08:21.819 )") 00:08:21.819 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:21.819 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
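The zcopy target is provisioned through the rpc_cmd calls traced above. Collected in one place, the same configuration can be issued with scripts/rpc.py, since rpc_cmd forwards the same method names and arguments to the target's RPC socket; all flags below are copied from the trace rather than explained:

  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

  # TCP transport with zero-copy enabled; -o and -c 0 are carried over from the trace.
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

  # Subsystem allowing any host (-a), with a serial number and room for 10 namespaces.
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

  # Listeners: one for the subsystem, one for the discovery service.
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Back the subsystem with a 32 MiB, 4096-byte-block malloc bdev as namespace 1.
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1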
00:08:21.819 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:21.819 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:21.819 "params": { 00:08:21.819 "name": "Nvme1", 00:08:21.819 "trtype": "tcp", 00:08:21.819 "traddr": "10.0.0.2", 00:08:21.819 "adrfam": "ipv4", 00:08:21.819 "trsvcid": "4420", 00:08:21.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:21.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:21.819 "hdgst": false, 00:08:21.819 "ddgst": false 00:08:21.819 }, 00:08:21.819 "method": "bdev_nvme_attach_controller" 00:08:21.819 }' 00:08:22.077 [2024-12-10 03:56:16.225129] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:08:22.077 [2024-12-10 03:56:16.225206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311856 ] 00:08:22.077 [2024-12-10 03:56:16.290123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.078 [2024-12-10 03:56:16.349452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.336 Running I/O for 10 seconds... 00:08:24.644 5542.00 IOPS, 43.30 MiB/s [2024-12-10T02:56:19.654Z] 5547.50 IOPS, 43.34 MiB/s [2024-12-10T02:56:20.611Z] 5566.33 IOPS, 43.49 MiB/s [2024-12-10T02:56:21.985Z] 5560.50 IOPS, 43.44 MiB/s [2024-12-10T02:56:22.918Z] 5557.00 IOPS, 43.41 MiB/s [2024-12-10T02:56:23.853Z] 5555.33 IOPS, 43.40 MiB/s [2024-12-10T02:56:24.787Z] 5560.43 IOPS, 43.44 MiB/s [2024-12-10T02:56:25.720Z] 5568.62 IOPS, 43.50 MiB/s [2024-12-10T02:56:26.653Z] 5579.67 IOPS, 43.59 MiB/s [2024-12-10T02:56:26.653Z] 5583.20 IOPS, 43.62 MiB/s 00:08:32.264 Latency(us) 00:08:32.264 [2024-12-10T02:56:26.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.264 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:32.264 Verification LBA range: start 0x0 length 0x1000 00:08:32.264 Nvme1n1 : 10.01 5583.46 43.62 0.00 0.00 22863.82 564.34 30680.56 00:08:32.264 [2024-12-10T02:56:26.653Z] =================================================================================================================== 00:08:32.264 [2024-12-10T02:56:26.653Z] Total : 5583.46 43.62 0.00 0.00 22863.82 564.34 30680.56 00:08:32.554 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2313058 00:08:32.554 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:32.554 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:32.554 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:32.554 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:32.554 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:32.554 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:32.554 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:32.554 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:32.554 { 00:08:32.554 "params": { 00:08:32.554 "name": 
"Nvme$subsystem", 00:08:32.554 "trtype": "$TEST_TRANSPORT", 00:08:32.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:32.554 "adrfam": "ipv4", 00:08:32.554 "trsvcid": "$NVMF_PORT", 00:08:32.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:32.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:32.554 "hdgst": ${hdgst:-false}, 00:08:32.554 "ddgst": ${ddgst:-false} 00:08:32.554 }, 00:08:32.554 "method": "bdev_nvme_attach_controller" 00:08:32.554 } 00:08:32.554 EOF 00:08:32.554 )") 00:08:32.554 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:32.554 [2024-12-10 03:56:26.851096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.554 [2024-12-10 03:56:26.851139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.554 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:32.554 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:32.554 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:32.554 "params": { 00:08:32.554 "name": "Nvme1", 00:08:32.554 "trtype": "tcp", 00:08:32.554 "traddr": "10.0.0.2", 00:08:32.554 "adrfam": "ipv4", 00:08:32.554 "trsvcid": "4420", 00:08:32.554 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:32.554 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:32.554 "hdgst": false, 00:08:32.554 "ddgst": false 00:08:32.554 }, 00:08:32.554 "method": "bdev_nvme_attach_controller" 00:08:32.554 }' 00:08:32.554 [2024-12-10 03:56:26.859046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.554 [2024-12-10 03:56:26.859067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.554 [2024-12-10 03:56:26.867066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.554 [2024-12-10 03:56:26.867086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.554 [2024-12-10 03:56:26.875087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.554 [2024-12-10 03:56:26.875107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.554 [2024-12-10 03:56:26.883108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.554 [2024-12-10 03:56:26.883128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.554 [2024-12-10 03:56:26.889497] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:08:32.554 [2024-12-10 03:56:26.889580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2313058 ] 00:08:32.555 [2024-12-10 03:56:26.891128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.555 [2024-12-10 03:56:26.891147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.555 [2024-12-10 03:56:26.899150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.555 [2024-12-10 03:56:26.899169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.555 [2024-12-10 03:56:26.907173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.555 [2024-12-10 03:56:26.907202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.555 [2024-12-10 03:56:26.915192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.555 [2024-12-10 03:56:26.915212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.555 [2024-12-10 03:56:26.923212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.555 [2024-12-10 03:56:26.923232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.555 [2024-12-10 03:56:26.931236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.555 [2024-12-10 03:56:26.931257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:26.939256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:26.939277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:26.947277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:26.947297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:26.955298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:26.955317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:26.960765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.813 [2024-12-10 03:56:26.963321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:26.963340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:26.971393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:26.971434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:26.979386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:26.979415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:26.987386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:26.987406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:08:32.813 [2024-12-10 03:56:26.995408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:26.995429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:27.003430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:27.003450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:27.011451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:27.011471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:27.019472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:27.019492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:27.020308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.813 [2024-12-10 03:56:27.027494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:27.027514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:27.035567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:27.035597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:27.043608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:27.043647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:27.051639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:27.051685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:27.059670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:27.059707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:27.067674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:27.067719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:27.075692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:27.075741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:27.083717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:27.083755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:27.091689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:27.091712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:27.099743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:27.099781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 
03:56:27.107767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:27.107806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:27.115798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.813 [2024-12-10 03:56:27.115854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.813 [2024-12-10 03:56:27.123771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.814 [2024-12-10 03:56:27.123792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.814 [2024-12-10 03:56:27.131794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.814 [2024-12-10 03:56:27.131816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.814 [2024-12-10 03:56:27.139836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.814 [2024-12-10 03:56:27.139859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.814 [2024-12-10 03:56:27.147858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.814 [2024-12-10 03:56:27.147896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.814 [2024-12-10 03:56:27.155876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.814 [2024-12-10 03:56:27.155897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.814 [2024-12-10 03:56:27.163897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.814 [2024-12-10 03:56:27.163920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.814 [2024-12-10 03:56:27.171925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.814 [2024-12-10 03:56:27.171947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.814 [2024-12-10 03:56:27.179949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.814 [2024-12-10 03:56:27.179972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.814 [2024-12-10 03:56:27.187969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.814 [2024-12-10 03:56:27.187990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.814 [2024-12-10 03:56:27.195976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.814 [2024-12-10 03:56:27.195996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.072 [2024-12-10 03:56:27.204016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.072 [2024-12-10 03:56:27.204048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.072 [2024-12-10 03:56:27.212034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.072 [2024-12-10 03:56:27.212054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.072 Running I/O for 5 seconds... 
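The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs surrounding the 5-second randrw run is the target rejecting repeated nvmf_subsystem_add_ns calls for a namespace ID that is already attached; the test's own xtrace is disabled here, so only the target-side error log is visible. A plausible reading of the pattern (a hypothetical reconstruction, not the actual zcopy.sh source) is a loop that keeps re-issuing the RPC while bdevperf holds the subsystem busy, treating the rejection itself as the expected outcome:

  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"   # same helper as in the provisioning sketch

  # Hypothetical loop: re-issue the add-namespace RPC while bdevperf (assumed to have been
  # started in the background as $bdevperf_pid) is still driving I/O. Every attempt is
  # expected to fail with "Requested NSID 1 already in use"; a success would be a bug.
  while kill -0 "$bdevperf_pid" 2>/dev/null; do
      if $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 2>/dev/null; then
          echo "unexpected success: NSID 1 should already be in use" >&2
          exit 1
      fi
      sleep 0.01
  done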
00:08:33.072 [2024-12-10 03:56:27.223754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:33.072 [2024-12-10 03:56:27.223782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:33.072 - 00:08:33.849 [2024-12-10 03:56:27.233444 - 03:56:28.215195] the same error pair repeats for every add-namespace attempt, roughly every 10 ms, while the 5-second I/O run is in progress
00:08:33.849 11730.00 IOPS, 91.64 MiB/s [2024-12-10T02:56:28.238Z]
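The interval reports interleaved with the errors (11730.00 IOPS at 91.64 MiB/s here, and the similar figures further down) are consistent with an average I/O size of about 8 KiB, assuming the bandwidth column is simply IOPS multiplied by the average I/O size; a quick check of that assumption:

  # implied average I/O size from the first interval report (prints roughly 8192 bytes)
  awk 'BEGIN { iops = 11730.00; mibps = 91.64; printf "implied I/O size: %.0f bytes\n", mibps * 1024 * 1024 / iops }'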
00:08:33.849 - 00:08:34.886 [2024-12-10 03:56:28.226204 - 03:56:29.208616] the same error pair repeats for every add-namespace attempt during the second interval of the run
00:08:34.886 11776.50 IOPS, 92.00 MiB/s [2024-12-10T02:56:29.275Z]
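Given how many times the same two messages recur, a per-second count is easier to read than the raw lines when checking whether the error rate changes over the run. A hedged sketch (build.log is an assumed local copy of this console output, not a file produced by the job):

  # count "Requested NSID 1 already in use" occurrences per wall-clock second
  grep -o '\[2024-12-10 [0-9:]*\.[0-9]*\] subsystem.c:2130' build.log |
    sed 's/\.[0-9]*\] subsystem.c:2130$//; s/^\[//' |
    sort | uniq -c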
00:08:34.886 - 00:08:35.921 [2024-12-10 03:56:29.219211 - 03:56:30.215825] the same error pair repeats for every add-namespace attempt during the third interval of the run
00:08:35.921 11778.67 IOPS, 92.02 MiB/s [2024-12-10T02:56:30.310Z]
00:08:35.921 - 00:08:36.179 [2024-12-10 03:56:30.228968 - 03:56:30.416845] the same error pair repeats for every add-namespace attempt
00:08:36.179 [2024-12-10 03:56:30.427443] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.179 [2024-12-10 03:56:30.427470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.179 [2024-12-10 03:56:30.438224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.179 [2024-12-10 03:56:30.438250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.179 [2024-12-10 03:56:30.451002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.179 [2024-12-10 03:56:30.451028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.179 [2024-12-10 03:56:30.461281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.179 [2024-12-10 03:56:30.461307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.179 [2024-12-10 03:56:30.472321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.179 [2024-12-10 03:56:30.472347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.179 [2024-12-10 03:56:30.483394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.179 [2024-12-10 03:56:30.483421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.179 [2024-12-10 03:56:30.493919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.179 [2024-12-10 03:56:30.493945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.179 [2024-12-10 03:56:30.504883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.179 [2024-12-10 03:56:30.504924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.179 [2024-12-10 03:56:30.518591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.179 [2024-12-10 03:56:30.518618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.179 [2024-12-10 03:56:30.530699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.179 [2024-12-10 03:56:30.530734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.179 [2024-12-10 03:56:30.540034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.179 [2024-12-10 03:56:30.540059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.179 [2024-12-10 03:56:30.554069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.179 [2024-12-10 03:56:30.554096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.564320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.564346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.574900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.574926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.585942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.585967] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.598769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.598797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.608918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.608943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.619801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.619843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.630816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.630843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.642877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.642918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.652478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.652503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.663342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.663367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.676359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.676385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.686692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.686719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.697382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.697407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.708374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.708399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.718871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.718912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.729837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.729865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.740838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.740891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.753178] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.753204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.763140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.763165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.774055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.774081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.784936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.784961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.795587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.795614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.438 [2024-12-10 03:56:30.808258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.438 [2024-12-10 03:56:30.808284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.439 [2024-12-10 03:56:30.818169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.439 [2024-12-10 03:56:30.818194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:30.829236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:30.829262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:30.839589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:30.839617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:30.850303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:30.850330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:30.860753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:30.860781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:30.871554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:30.871580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:30.882325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:30.882350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:30.893069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:30.893094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:30.905761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:30.905788] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:30.916136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:30.916161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:30.926916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:30.926942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:30.937555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:30.937597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:30.948112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:30.948145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:30.958458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:30.958498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:30.969393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:30.969420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:30.982003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:30.982029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:30.991473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:30.991498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:31.003023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:31.003049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:31.015621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:31.015649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:31.025849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:31.025876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:31.035985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:31.036011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:31.046845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:31.046872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:31.059749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:31.059777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.697 [2024-12-10 03:56:31.070049] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.697 [2024-12-10 03:56:31.070074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.080665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.080694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.091563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.091602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.104342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.104368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.114645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.114688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.125665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.125692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.138111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.138137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.148393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.148418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.159367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.159401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.170126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.170152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.181006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.181031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.194036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.194062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.204356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.204382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.214986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.215012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 11794.00 IOPS, 92.14 MiB/s [2024-12-10T02:56:31.345Z] [2024-12-10 03:56:31.225826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:36.956 [2024-12-10 03:56:31.225867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.236813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.236867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.249595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.249624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.259542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.259579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.270504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.270551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.283068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.283093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.293186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.293213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.304091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.304116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.316615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.316642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.326541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.326576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.956 [2024-12-10 03:56:31.337238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.956 [2024-12-10 03:56:31.337267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.349822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.349864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.359944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.359969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.370673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.370699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.381539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.381574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.394424] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.394451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.403582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.403609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.414956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.414982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.425874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.425900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.436749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.436777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.449374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.449400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.459788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.459815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.470336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.470362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.481260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.481287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.492073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.492115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.503112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.503138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.515641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.515669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.525947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.525973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.536923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.536950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.547460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.547486] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.558073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.558100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.568606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.568634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.580220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.580246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.215 [2024-12-10 03:56:31.590702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.215 [2024-12-10 03:56:31.590729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.473 [2024-12-10 03:56:31.601627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.473 [2024-12-10 03:56:31.601654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.473 [2024-12-10 03:56:31.614422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.473 [2024-12-10 03:56:31.614447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.473 [2024-12-10 03:56:31.624922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.473 [2024-12-10 03:56:31.624947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.473 [2024-12-10 03:56:31.635563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.473 [2024-12-10 03:56:31.635590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.473 [2024-12-10 03:56:31.648310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.473 [2024-12-10 03:56:31.648335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.473 [2024-12-10 03:56:31.658288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.473 [2024-12-10 03:56:31.658313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.473 [2024-12-10 03:56:31.669214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.473 [2024-12-10 03:56:31.669240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.473 [2024-12-10 03:56:31.682030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.473 [2024-12-10 03:56:31.682055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.473 [2024-12-10 03:56:31.691943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.473 [2024-12-10 03:56:31.691969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.473 [2024-12-10 03:56:31.702726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.473 [2024-12-10 03:56:31.702753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.473 [2024-12-10 03:56:31.715688] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.473 [2024-12-10 03:56:31.715715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.473 [2024-12-10 03:56:31.727649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.474 [2024-12-10 03:56:31.727677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.474 [2024-12-10 03:56:31.736713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.474 [2024-12-10 03:56:31.736741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.474 [2024-12-10 03:56:31.748430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.474 [2024-12-10 03:56:31.748456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.474 [2024-12-10 03:56:31.759089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.474 [2024-12-10 03:56:31.759114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.474 [2024-12-10 03:56:31.769910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.474 [2024-12-10 03:56:31.769935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.474 [2024-12-10 03:56:31.782534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.474 [2024-12-10 03:56:31.782578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.474 [2024-12-10 03:56:31.792303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.474 [2024-12-10 03:56:31.792329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.474 [2024-12-10 03:56:31.803322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.474 [2024-12-10 03:56:31.803363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.474 [2024-12-10 03:56:31.813980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.474 [2024-12-10 03:56:31.814005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.474 [2024-12-10 03:56:31.824838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.474 [2024-12-10 03:56:31.824879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.474 [2024-12-10 03:56:31.837245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.474 [2024-12-10 03:56:31.837270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.474 [2024-12-10 03:56:31.847228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.474 [2024-12-10 03:56:31.847254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:31.858063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:31.858105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:31.869116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:31.869142] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:31.880209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:31.880235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:31.891213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:31.891238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:31.901675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:31.901702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:31.912406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:31.912431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:31.923157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:31.923182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:31.934012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:31.934038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:31.944929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:31.944955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:31.955941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:31.955967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:31.966690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:31.966718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:31.979358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:31.979384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:31.989615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:31.989652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:32.000190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:32.000215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:32.010737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:32.010764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:32.021095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:32.021121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:32.031431] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:32.031456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:32.042124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:32.042150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:32.052909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:32.052935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:32.063740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:32.063782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:32.076428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:32.076454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:32.088309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:32.088334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:32.097733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:32.097776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.732 [2024-12-10 03:56:32.109428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.732 [2024-12-10 03:56:32.109454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.990 [2024-12-10 03:56:32.119929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.990 [2024-12-10 03:56:32.119954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.990 [2024-12-10 03:56:32.135112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.990 [2024-12-10 03:56:32.135140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.145493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.145520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.156311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.156337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.166945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.166971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.178100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.178125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.190786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.190813] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.200706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.200741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.212057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.212082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.222736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.222764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 11792.60 IOPS, 92.13 MiB/s [2024-12-10T02:56:32.380Z] [2024-12-10 03:56:32.232228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.232253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 00:08:37.991 Latency(us) 00:08:37.991 [2024-12-10T02:56:32.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.991 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:37.991 Nvme1n1 : 5.01 11799.32 92.18 0.00 0.00 10834.61 4757.43 19612.25 00:08:37.991 [2024-12-10T02:56:32.380Z] =================================================================================================================== 00:08:37.991 [2024-12-10T02:56:32.380Z] Total : 11799.32 92.18 0.00 0.00 10834.61 4757.43 19612.25 00:08:37.991 [2024-12-10 03:56:32.237209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.237232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.245238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.245265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.253246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.253267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.261347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.261395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.269371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.269423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.277400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.277454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.285412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.285462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.293430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 
03:56:32.293480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.301462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.301515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.309477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.309527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.317487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.317532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.325520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.325581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.333537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.333606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.341572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.341628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.349595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.349646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.357612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.357662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.991 [2024-12-10 03:56:32.365635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.991 [2024-12-10 03:56:32.365682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.249 [2024-12-10 03:56:32.373652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.249 [2024-12-10 03:56:32.373697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.249 [2024-12-10 03:56:32.381682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.249 [2024-12-10 03:56:32.381715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.249 [2024-12-10 03:56:32.389627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.249 [2024-12-10 03:56:32.389649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.249 [2024-12-10 03:56:32.397649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.249 [2024-12-10 03:56:32.397671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.249 [2024-12-10 03:56:32.405675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.249 [2024-12-10 03:56:32.405696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.249 [2024-12-10 03:56:32.413696] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.249 [2024-12-10 03:56:32.413718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.249 [2024-12-10 03:56:32.421796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.249 [2024-12-10 03:56:32.421847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.249 [2024-12-10 03:56:32.429798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.249 [2024-12-10 03:56:32.429845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.249 [2024-12-10 03:56:32.437769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.249 [2024-12-10 03:56:32.437793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.249 [2024-12-10 03:56:32.445776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.249 [2024-12-10 03:56:32.445796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.249 [2024-12-10 03:56:32.453801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.249 [2024-12-10 03:56:32.453835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2313058) - No such process 00:08:38.249 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2313058 00:08:38.249 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.249 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.249 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:38.249 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.249 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:38.249 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.249 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:38.249 delay0 00:08:38.249 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.249 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:38.249 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.249 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:38.249 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.249 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:38.250 [2024-12-10 03:56:32.579477] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service 
referral 00:08:44.807 [2024-12-10 03:56:38.675718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7f260 is same with the state(6) to be set 00:08:44.807 [2024-12-10 03:56:38.675776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7f260 is same with the state(6) to be set 00:08:44.807 Initializing NVMe Controllers 00:08:44.807 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:44.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:44.807 Initialization complete. Launching workers. 00:08:44.808 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 66 00:08:44.808 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 353, failed to submit 33 00:08:44.808 success 189, unsuccessful 164, failed 0 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:44.808 rmmod nvme_tcp 00:08:44.808 rmmod nvme_fabrics 00:08:44.808 rmmod nvme_keyring 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2311825 ']' 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2311825 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2311825 ']' 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2311825 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2311825 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2311825' 00:08:44.808 killing process with pid 2311825 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2311825 00:08:44.808 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2311825 
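The zcopy abort run traced above reduces to a short reconfiguration of the target followed by the abort workload. The lines below are a minimal sketch of the equivalent manual steps, assuming an SPDK NVMe-oF/TCP target is already listening on 10.0.0.2:4420 with subsystem nqn.2016-06.io.spdk:cnode1 and a malloc0 bdev attached as NSID 1; every flag is taken from the trace itself, while calling scripts/rpc.py directly (rather than the harness's rpc_cmd wrapper) is an assumption:

  # detach the active namespace, wrap malloc0 in a delay bdev, re-attach it as NSID 1
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # queue deep random I/O against the slowed-down namespace and abort it, as in the run above
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs earlier in the output appears to come from a background loop retrying nvmf_subsystem_add_ns for NSID 1 while that namespace is still attached; subsystem.c rejects the duplicate NSID each time, so those errors read as expected noise for this test rather than a failure. The actual results of the run are the abort summary above (353 aborts submitted, 33 failed to submit; 189 successful, 164 unsuccessful, 0 failed) and the latency table showing roughly 11.8k IOPS / 92 MiB/s over the 5-second window.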
00:08:44.808 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:44.808 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:44.808 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:44.808 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:44.808 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:44.808 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:44.808 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:44.808 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:44.808 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:44.808 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.808 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.808 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.713 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:46.713 00:08:46.713 real 0m27.819s 00:08:46.713 user 0m39.952s 00:08:46.713 sys 0m8.545s 00:08:46.713 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.713 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.713 ************************************ 00:08:46.714 END TEST nvmf_zcopy 00:08:46.714 ************************************ 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:46.972 ************************************ 00:08:46.972 START TEST nvmf_nmic 00:08:46.972 ************************************ 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:46.972 * Looking for test storage... 
00:08:46.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:46.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.972 --rc genhtml_branch_coverage=1 00:08:46.972 --rc genhtml_function_coverage=1 00:08:46.972 --rc genhtml_legend=1 00:08:46.972 --rc geninfo_all_blocks=1 00:08:46.972 --rc geninfo_unexecuted_blocks=1 00:08:46.972 00:08:46.972 ' 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:46.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.972 --rc genhtml_branch_coverage=1 00:08:46.972 --rc genhtml_function_coverage=1 00:08:46.972 --rc genhtml_legend=1 00:08:46.972 --rc geninfo_all_blocks=1 00:08:46.972 --rc geninfo_unexecuted_blocks=1 00:08:46.972 00:08:46.972 ' 00:08:46.972 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:46.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.972 --rc genhtml_branch_coverage=1 00:08:46.972 --rc genhtml_function_coverage=1 00:08:46.972 --rc genhtml_legend=1 00:08:46.973 --rc geninfo_all_blocks=1 00:08:46.973 --rc geninfo_unexecuted_blocks=1 00:08:46.973 00:08:46.973 ' 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:46.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.973 --rc genhtml_branch_coverage=1 00:08:46.973 --rc genhtml_function_coverage=1 00:08:46.973 --rc genhtml_legend=1 00:08:46.973 --rc geninfo_all_blocks=1 00:08:46.973 --rc geninfo_unexecuted_blocks=1 00:08:46.973 00:08:46.973 ' 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
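
The lt/cmp_versions trace above is the coverage setup checking whether the installed lcov is older than version 2 before picking the --rc lcov_* flags. A simplified, standalone version of that component-wise comparison (the real helper is cmp_versions in scripts/common.sh; this sketch only handles plain dot-separated numeric versions):

#!/usr/bin/env bash
# Return success (0) when version $1 is strictly older than version $2.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}      # missing components compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                            # versions are equal
}

# Same decision the harness makes: an old lcov needs the explicit --rc options.
if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi
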
00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:46.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:46.973 
03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:46.973 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:49.509 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:49.509 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.509 03:56:43 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:49.509 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.509 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:49.510 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:49.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:08:49.510 00:08:49.510 --- 10.0.0.2 ping statistics --- 00:08:49.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.510 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:49.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:08:49.510 00:08:49.510 --- 10.0.0.1 ping statistics --- 00:08:49.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.510 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2316455 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2316455 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2316455 ']' 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.510 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.510 [2024-12-10 03:56:43.692184] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
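
The nvmf_tcp_init trace above is what builds the test topology: the first E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens port 4420, and a ping in each direction verifies reachability before the target is launched inside the namespace. A condensed sketch of those steps as plain iproute2/iptables commands; the interface and namespace names are the ones printed in this log and will differ on other machines:

#!/usr/bin/env bash
set -e
NS=cvl_0_0_ns_spdk      # target-side namespace, as named in this run
TGT_IF=cvl_0_0          # target port (gets 10.0.0.2 inside the namespace)
INI_IF=cvl_0_1          # initiator port (gets 10.0.0.1 in the root namespace)

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic in from the initiator side, then sanity-check both directions.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# The target is then started inside the namespace (path abbreviated here):
# ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
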
00:08:49.510 [2024-12-10 03:56:43.692282] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.510 [2024-12-10 03:56:43.770647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.510 [2024-12-10 03:56:43.830540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.510 [2024-12-10 03:56:43.830625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.510 [2024-12-10 03:56:43.830643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.510 [2024-12-10 03:56:43.830670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.510 [2024-12-10 03:56:43.830680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.510 [2024-12-10 03:56:43.832346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.510 [2024-12-10 03:56:43.832406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.510 [2024-12-10 03:56:43.832427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.510 [2024-12-10 03:56:43.832430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.769 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.769 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:49.769 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:49.769 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:49.769 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.769 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.769 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.769 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.769 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.769 [2024-12-10 03:56:43.982591] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.769 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.769 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:49.769 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.769 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.769 Malloc0 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.769 [2024-12-10 03:56:44.057073] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:49.769 test case1: single bdev can't be used in multiple subsystems 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.769 [2024-12-10 03:56:44.080923] bdev.c:8511:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:49.769 [2024-12-10 03:56:44.080952] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:49.769 [2024-12-10 03:56:44.080983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.769 request: 00:08:49.769 { 00:08:49.769 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:49.769 "namespace": { 00:08:49.769 "bdev_name": "Malloc0", 00:08:49.769 "no_auto_visible": false, 
00:08:49.769 "hide_metadata": false 00:08:49.769 }, 00:08:49.769 "method": "nvmf_subsystem_add_ns", 00:08:49.769 "req_id": 1 00:08:49.769 } 00:08:49.769 Got JSON-RPC error response 00:08:49.769 response: 00:08:49.769 { 00:08:49.769 "code": -32602, 00:08:49.769 "message": "Invalid parameters" 00:08:49.769 } 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:49.769 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:49.770 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:49.770 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:49.770 Adding namespace failed - expected result. 00:08:49.770 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:49.770 test case2: host connect to nvmf target in multiple paths 00:08:49.770 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:49.770 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.770 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.770 [2024-12-10 03:56:44.089053] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:49.770 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.770 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:50.335 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:51.268 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:51.268 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:51.268 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:51.268 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:51.268 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:53.165 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:53.166 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:53.166 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:53.166 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:53.166 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:53.166 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:53.166 03:56:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:53.166 [global] 00:08:53.166 thread=1 00:08:53.166 invalidate=1 00:08:53.166 rw=write 00:08:53.166 time_based=1 00:08:53.166 runtime=1 00:08:53.166 ioengine=libaio 00:08:53.166 direct=1 00:08:53.166 bs=4096 00:08:53.166 iodepth=1 00:08:53.166 norandommap=0 00:08:53.166 numjobs=1 00:08:53.166 00:08:53.166 verify_dump=1 00:08:53.166 verify_backlog=512 00:08:53.166 verify_state_save=0 00:08:53.166 do_verify=1 00:08:53.166 verify=crc32c-intel 00:08:53.166 [job0] 00:08:53.166 filename=/dev/nvme0n1 00:08:53.166 Could not set queue depth (nvme0n1) 00:08:53.424 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:53.424 fio-3.35 00:08:53.424 Starting 1 thread 00:08:54.797 00:08:54.797 job0: (groupid=0, jobs=1): err= 0: pid=2317093: Tue Dec 10 03:56:48 2024 00:08:54.797 read: IOPS=426, BW=1706KiB/s (1747kB/s)(1708KiB/1001msec) 00:08:54.797 slat (nsec): min=6828, max=33936, avg=10640.00, stdev=5037.57 00:08:54.797 clat (usec): min=192, max=42065, avg=2074.37, stdev=8573.50 00:08:54.797 lat (usec): min=200, max=42083, avg=2085.01, stdev=8576.24 00:08:54.797 clat percentiles (usec): 00:08:54.797 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 208], 00:08:54.797 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 231], 00:08:54.797 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 269], 95.00th=[ 302], 00:08:54.797 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:54.797 | 99.99th=[42206] 00:08:54.797 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:08:54.797 slat (usec): min=7, max=29394, avg=66.49, stdev=1298.68 00:08:54.797 clat (usec): min=122, max=240, avg=142.38, stdev=11.84 00:08:54.797 lat (usec): min=131, max=29572, avg=208.87, stdev=1300.33 00:08:54.797 clat percentiles (usec): 00:08:54.797 | 1.00th=[ 126], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 135], 00:08:54.797 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:08:54.797 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 163], 00:08:54.797 | 99.00th=[ 180], 99.50th=[ 208], 99.90th=[ 241], 99.95th=[ 241], 00:08:54.797 | 99.99th=[ 241] 00:08:54.797 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:54.797 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:54.797 lat (usec) : 250=92.44%, 500=5.54% 00:08:54.797 lat (msec) : 50=2.02% 00:08:54.797 cpu : usr=0.80%, sys=0.90%, ctx=941, majf=0, minf=1 00:08:54.797 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:54.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.797 issued rwts: total=427,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:54.797 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:54.797 00:08:54.797 Run status group 0 (all jobs): 00:08:54.797 READ: bw=1706KiB/s (1747kB/s), 1706KiB/s-1706KiB/s (1747kB/s-1747kB/s), io=1708KiB (1749kB), run=1001-1001msec 00:08:54.797 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:08:54.797 00:08:54.797 Disk stats (read/write): 00:08:54.797 nvme0n1: ios=43/512, merge=0/0, ticks=1710/73, in_queue=1783, util=98.70% 00:08:54.797 03:56:48 
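
The fio-wrapper call above expands to the small write-plus-verify job whose parameters and results are dumped in the trace. The same workload can be reproduced outside the wrapper by writing the job file verbatim and running fio against it; the device node /dev/nvme0n1 and the one-second runtime come from this run, and the namespace may enumerate differently on another host:

#!/usr/bin/env bash
# Re-run the same write+verify job shown above, without the wrapper.
cat > /tmp/nmic-job.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF

fio /tmp/nmic-job.fio
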
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:54.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:54.797 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:54.797 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:54.797 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:54.797 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:54.797 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:54.797 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:54.797 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:54.797 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:54.797 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:54.797 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:54.797 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:54.797 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:54.797 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:54.797 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:54.797 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:54.797 rmmod nvme_tcp 00:08:54.797 rmmod nvme_fabrics 00:08:54.797 rmmod nvme_keyring 00:08:54.797 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.797 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:54.797 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:54.797 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2316455 ']' 00:08:54.797 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2316455 00:08:54.797 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2316455 ']' 00:08:54.797 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2316455 00:08:54.797 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:54.797 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.797 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2316455 00:08:54.797 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.797 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.797 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2316455' 00:08:54.797 killing process with pid 2316455 00:08:54.797 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2316455 00:08:54.797 03:56:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2316455 00:08:55.057 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:55.057 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:55.057 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:55.057 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:55.057 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:55.057 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:55.057 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:55.057 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:55.057 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:55.057 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.057 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.057 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.996 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:56.996 00:08:56.996 real 0m10.196s 00:08:56.996 user 0m22.887s 00:08:56.996 sys 0m2.408s 00:08:56.996 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.996 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:56.996 ************************************ 00:08:56.996 END TEST nvmf_nmic 00:08:56.996 ************************************ 00:08:56.996 03:56:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:56.996 03:56:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:56.996 03:56:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.996 03:56:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:57.255 ************************************ 00:08:57.255 START TEST nvmf_fio_target 00:08:57.255 ************************************ 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:57.255 * Looking for test storage... 
00:08:57.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:57.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.255 --rc genhtml_branch_coverage=1 00:08:57.255 --rc genhtml_function_coverage=1 00:08:57.255 --rc genhtml_legend=1 00:08:57.255 --rc geninfo_all_blocks=1 00:08:57.255 --rc geninfo_unexecuted_blocks=1 00:08:57.255 00:08:57.255 ' 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:57.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.255 --rc genhtml_branch_coverage=1 00:08:57.255 --rc genhtml_function_coverage=1 00:08:57.255 --rc genhtml_legend=1 00:08:57.255 --rc geninfo_all_blocks=1 00:08:57.255 --rc geninfo_unexecuted_blocks=1 00:08:57.255 00:08:57.255 ' 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:57.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.255 --rc genhtml_branch_coverage=1 00:08:57.255 --rc genhtml_function_coverage=1 00:08:57.255 --rc genhtml_legend=1 00:08:57.255 --rc geninfo_all_blocks=1 00:08:57.255 --rc geninfo_unexecuted_blocks=1 00:08:57.255 00:08:57.255 ' 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:57.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.255 --rc genhtml_branch_coverage=1 00:08:57.255 --rc genhtml_function_coverage=1 00:08:57.255 --rc genhtml_legend=1 00:08:57.255 --rc geninfo_all_blocks=1 00:08:57.255 --rc geninfo_unexecuted_blocks=1 00:08:57.255 00:08:57.255 ' 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:57.255 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:57.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:57.256 03:56:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:57.256 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.791 03:56:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:59.791 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:59.791 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.791 03:56:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.791 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:59.792 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:59.792 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:59.792 03:56:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:59.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:08:59.792 00:08:59.792 --- 10.0.0.2 ping statistics --- 00:08:59.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.792 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:59.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:08:59.792 00:08:59.792 --- 10.0.0.1 ping statistics --- 00:08:59.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.792 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2319181 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2319181 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2319181 ']' 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.792 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.792 [2024-12-10 03:56:54.026044] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
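For reference, the namespace plumbing traced just above (nvmf_tcp_init in nvmf/common.sh) reduces to the following iproute2/iptables sequence. This is a minimal sketch, not part of the captured log: the interface names (cvl_0_0, cvl_0_1), the 10.0.0.1/10.0.0.2 addresses, the namespace name and TCP port 4420 are taken from the trace above; treating cvl_0_0 as the target-side port and cvl_0_1 as the initiator-side port is an assumption based on how the helpers assign them here.

  # Sketch of the loopback topology the test harness builds before launching
  # nvmf_tgt inside the namespace. Names/addresses mirror the log above.
  TARGET_IF=cvl_0_0          # port moved into the target namespace (assumed role)
  INITIATOR_IF=cvl_0_1       # port left in the host namespace (assumed role)
  NS=cvl_0_0_ns_spdk

  # Clear any stale IPv4 addresses, then move the target port into its own netns.
  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"

  # Initiator side stays in the host namespace; target side lives in $NS.
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # Accept NVMe/TCP traffic (port 4420) arriving on the initiator-side
  # interface, as the ipts helper does in the trace above.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

  # Sanity checks mirroring the two pings captured in the log.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

Once this topology is up, the target application is started inside the namespace (ip netns exec "$NS" .../nvmf_tgt ...), which is exactly what the nvmfappstart step below does with the NVMF_TARGET_NS_CMD prefix.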
00:08:59.792 [2024-12-10 03:56:54.026125] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.792 [2024-12-10 03:56:54.100708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.792 [2024-12-10 03:56:54.158624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.792 [2024-12-10 03:56:54.158680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.792 [2024-12-10 03:56:54.158709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.792 [2024-12-10 03:56:54.158720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.792 [2024-12-10 03:56:54.158730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.792 [2024-12-10 03:56:54.160353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.792 [2024-12-10 03:56:54.160412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.792 [2024-12-10 03:56:54.160478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.792 [2024-12-10 03:56:54.160483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.051 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.051 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:00.051 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:00.051 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:00.051 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.051 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.051 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:00.308 [2024-12-10 03:56:54.565676] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.308 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:00.566 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:00.566 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:01.134 03:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:01.134 03:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:01.391 03:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:01.391 03:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:01.649 03:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:01.649 03:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:01.907 03:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.165 03:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:02.165 03:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.423 03:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:02.423 03:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.681 03:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:02.681 03:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:02.939 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:03.197 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:03.197 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:03.454 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:03.454 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:03.712 03:56:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.969 [2024-12-10 03:56:58.263939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.969 03:56:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:04.227 03:56:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:04.485 03:56:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.418 03:56:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:05.418 03:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:05.418 03:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.418 03:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:05.418 03:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:05.418 03:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:07.316 03:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:07.316 03:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:07.316 03:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.316 03:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:07.316 03:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.316 03:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:07.316 03:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:07.316 [global] 00:09:07.316 thread=1 00:09:07.316 invalidate=1 00:09:07.316 rw=write 00:09:07.316 time_based=1 00:09:07.316 runtime=1 00:09:07.316 ioengine=libaio 00:09:07.316 direct=1 00:09:07.316 bs=4096 00:09:07.316 iodepth=1 00:09:07.316 norandommap=0 00:09:07.316 numjobs=1 00:09:07.316 00:09:07.316 verify_dump=1 00:09:07.316 verify_backlog=512 00:09:07.316 verify_state_save=0 00:09:07.316 do_verify=1 00:09:07.316 verify=crc32c-intel 00:09:07.316 [job0] 00:09:07.316 filename=/dev/nvme0n1 00:09:07.316 [job1] 00:09:07.316 filename=/dev/nvme0n2 00:09:07.316 [job2] 00:09:07.316 filename=/dev/nvme0n3 00:09:07.316 [job3] 00:09:07.316 filename=/dev/nvme0n4 00:09:07.316 Could not set queue depth (nvme0n1) 00:09:07.316 Could not set queue depth (nvme0n2) 00:09:07.316 Could not set queue depth (nvme0n3) 00:09:07.316 Could not set queue depth (nvme0n4) 00:09:07.580 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.580 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.580 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.580 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.580 fio-3.35 00:09:07.580 Starting 4 threads 00:09:08.954 00:09:08.954 job0: (groupid=0, jobs=1): err= 0: pid=2320372: Tue Dec 10 03:57:02 2024 00:09:08.954 read: IOPS=1257, BW=5031KiB/s (5152kB/s)(5036KiB/1001msec) 00:09:08.954 slat (nsec): min=5449, max=59877, avg=18427.15, stdev=9917.46 00:09:08.954 clat (usec): min=185, max=42112, avg=506.36, stdev=2781.07 00:09:08.954 lat (usec): min=198, max=42128, avg=524.79, stdev=2780.78 00:09:08.954 clat percentiles (usec): 00:09:08.954 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 210], 
00:09:08.954 | 30.00th=[ 219], 40.00th=[ 269], 50.00th=[ 318], 60.00th=[ 347], 00:09:08.954 | 70.00th=[ 367], 80.00th=[ 388], 90.00th=[ 408], 95.00th=[ 437], 00:09:08.954 | 99.00th=[ 529], 99.50th=[17957], 99.90th=[42206], 99.95th=[42206], 00:09:08.954 | 99.99th=[42206] 00:09:08.954 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:08.954 slat (nsec): min=6690, max=53274, avg=15303.89, stdev=6849.34 00:09:08.954 clat (usec): min=129, max=393, avg=197.42, stdev=41.18 00:09:08.954 lat (usec): min=138, max=416, avg=212.73, stdev=41.49 00:09:08.954 clat percentiles (usec): 00:09:08.954 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:09:08.954 | 30.00th=[ 165], 40.00th=[ 176], 50.00th=[ 198], 60.00th=[ 210], 00:09:08.954 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 245], 95.00th=[ 265], 00:09:08.954 | 99.00th=[ 322], 99.50th=[ 355], 99.90th=[ 392], 99.95th=[ 396], 00:09:08.954 | 99.99th=[ 396] 00:09:08.954 bw ( KiB/s): min= 4096, max= 4096, per=16.84%, avg=4096.00, stdev= 0.00, samples=1 00:09:08.954 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:08.954 lat (usec) : 250=68.12%, 500=31.34%, 750=0.29% 00:09:08.954 lat (msec) : 20=0.04%, 50=0.21% 00:09:08.954 cpu : usr=2.60%, sys=4.70%, ctx=2797, majf=0, minf=1 00:09:08.954 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.954 issued rwts: total=1259,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.954 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.954 job1: (groupid=0, jobs=1): err= 0: pid=2320373: Tue Dec 10 03:57:02 2024 00:09:08.954 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:08.954 slat (nsec): min=6075, max=54673, avg=13548.94, stdev=6756.39 00:09:08.954 clat (usec): min=187, max=582, avg=262.95, stdev=72.34 00:09:08.954 lat (usec): min=195, max=608, avg=276.50, stdev=75.88 00:09:08.954 clat percentiles (usec): 00:09:08.954 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 221], 00:09:08.954 | 30.00th=[ 231], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 251], 00:09:08.954 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 314], 95.00th=[ 457], 00:09:08.954 | 99.00th=[ 562], 99.50th=[ 570], 99.90th=[ 578], 99.95th=[ 586], 00:09:08.954 | 99.99th=[ 586] 00:09:08.954 write: IOPS=2098, BW=8396KiB/s (8597kB/s)(8404KiB/1001msec); 0 zone resets 00:09:08.954 slat (nsec): min=7093, max=57965, avg=16960.07, stdev=7538.46 00:09:08.954 clat (usec): min=130, max=812, avg=180.94, stdev=31.90 00:09:08.954 lat (usec): min=140, max=823, avg=197.90, stdev=35.80 00:09:08.954 clat percentiles (usec): 00:09:08.954 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 157], 00:09:08.954 | 30.00th=[ 163], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 184], 00:09:08.954 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 212], 95.00th=[ 229], 00:09:08.954 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 330], 99.95th=[ 330], 00:09:08.954 | 99.99th=[ 816] 00:09:08.954 bw ( KiB/s): min= 9064, max= 9064, per=37.26%, avg=9064.00, stdev= 0.00, samples=1 00:09:08.954 iops : min= 2266, max= 2266, avg=2266.00, stdev= 0.00, samples=1 00:09:08.954 lat (usec) : 250=77.22%, 500=21.02%, 750=1.74%, 1000=0.02% 00:09:08.954 cpu : usr=4.30%, sys=8.70%, ctx=4150, majf=0, minf=3 00:09:08.954 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.954 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.954 issued rwts: total=2048,2101,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.954 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.954 job2: (groupid=0, jobs=1): err= 0: pid=2320374: Tue Dec 10 03:57:02 2024 00:09:08.954 read: IOPS=686, BW=2748KiB/s (2814kB/s)(2800KiB/1019msec) 00:09:08.954 slat (nsec): min=6016, max=42998, avg=12711.45, stdev=5932.86 00:09:08.954 clat (usec): min=185, max=41946, avg=1117.32, stdev=5916.07 00:09:08.954 lat (usec): min=192, max=41981, avg=1130.03, stdev=5916.48 00:09:08.954 clat percentiles (usec): 00:09:08.954 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 217], 00:09:08.954 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 249], 00:09:08.954 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 00:09:08.954 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:08.954 | 99.99th=[42206] 00:09:08.954 write: IOPS=1004, BW=4020KiB/s (4116kB/s)(4096KiB/1019msec); 0 zone resets 00:09:08.954 slat (nsec): min=8204, max=56866, avg=20328.67, stdev=7327.24 00:09:08.954 clat (usec): min=149, max=246, avg=194.10, stdev=14.66 00:09:08.954 lat (usec): min=158, max=269, avg=214.43, stdev=18.37 00:09:08.954 clat percentiles (usec): 00:09:08.954 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 184], 00:09:08.954 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:09:08.954 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 219], 00:09:08.954 | 99.00th=[ 229], 99.50th=[ 233], 99.90th=[ 239], 99.95th=[ 247], 00:09:08.954 | 99.99th=[ 247] 00:09:08.954 bw ( KiB/s): min= 4096, max= 4096, per=16.84%, avg=4096.00, stdev= 0.00, samples=2 00:09:08.954 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:09:08.954 lat (usec) : 250=84.16%, 500=14.97% 00:09:08.954 lat (msec) : 50=0.87% 00:09:08.954 cpu : usr=1.87%, sys=3.93%, ctx=1725, majf=0, minf=2 00:09:08.954 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.954 issued rwts: total=700,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.954 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.954 job3: (groupid=0, jobs=1): err= 0: pid=2320375: Tue Dec 10 03:57:02 2024 00:09:08.954 read: IOPS=1060, BW=4244KiB/s (4346kB/s)(4248KiB/1001msec) 00:09:08.954 slat (nsec): min=4772, max=59111, avg=15878.53, stdev=10388.76 00:09:08.954 clat (usec): min=179, max=41342, avg=605.22, stdev=3542.21 00:09:08.954 lat (usec): min=184, max=41376, avg=621.10, stdev=3542.95 00:09:08.954 clat percentiles (usec): 00:09:08.954 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 206], 00:09:08.954 | 30.00th=[ 212], 40.00th=[ 227], 50.00th=[ 285], 60.00th=[ 314], 00:09:08.954 | 70.00th=[ 343], 80.00th=[ 367], 90.00th=[ 392], 95.00th=[ 420], 00:09:08.954 | 99.00th=[ 537], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:08.954 | 99.99th=[41157] 00:09:08.954 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:08.954 slat (nsec): min=6637, max=39707, avg=13411.78, stdev=5281.52 00:09:08.954 clat (usec): min=138, max=380, avg=201.59, stdev=35.54 00:09:08.954 lat (usec): min=146, max=405, avg=215.00, stdev=35.37 00:09:08.954 clat 
percentiles (usec): 00:09:08.954 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:09:08.954 | 30.00th=[ 176], 40.00th=[ 198], 50.00th=[ 206], 60.00th=[ 212], 00:09:08.954 | 70.00th=[ 221], 80.00th=[ 233], 90.00th=[ 247], 95.00th=[ 258], 00:09:08.954 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 379], 99.95th=[ 379], 00:09:08.954 | 99.99th=[ 379] 00:09:08.954 bw ( KiB/s): min= 8192, max= 8192, per=33.68%, avg=8192.00, stdev= 0.00, samples=1 00:09:08.954 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:08.954 lat (usec) : 250=73.17%, 500=26.37%, 750=0.12% 00:09:08.954 lat (msec) : 20=0.04%, 50=0.31% 00:09:08.954 cpu : usr=2.50%, sys=3.30%, ctx=2599, majf=0, minf=1 00:09:08.954 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.954 issued rwts: total=1062,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.954 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.954 00:09:08.954 Run status group 0 (all jobs): 00:09:08.954 READ: bw=19.4MiB/s (20.4MB/s), 2748KiB/s-8184KiB/s (2814kB/s-8380kB/s), io=19.8MiB (20.8MB), run=1001-1019msec 00:09:08.954 WRITE: bw=23.8MiB/s (24.9MB/s), 4020KiB/s-8396KiB/s (4116kB/s-8597kB/s), io=24.2MiB (25.4MB), run=1001-1019msec 00:09:08.954 00:09:08.954 Disk stats (read/write): 00:09:08.954 nvme0n1: ios=1073/1266, merge=0/0, ticks=943/236, in_queue=1179, util=86.07% 00:09:08.954 nvme0n2: ios=1559/1929, merge=0/0, ticks=1304/338, in_queue=1642, util=90.04% 00:09:08.954 nvme0n3: ios=708/1024, merge=0/0, ticks=1521/189, in_queue=1710, util=93.53% 00:09:08.954 nvme0n4: ios=963/1024, merge=0/0, ticks=1458/201, in_queue=1659, util=94.32% 00:09:08.954 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:08.954 [global] 00:09:08.954 thread=1 00:09:08.954 invalidate=1 00:09:08.954 rw=randwrite 00:09:08.954 time_based=1 00:09:08.954 runtime=1 00:09:08.954 ioengine=libaio 00:09:08.954 direct=1 00:09:08.955 bs=4096 00:09:08.955 iodepth=1 00:09:08.955 norandommap=0 00:09:08.955 numjobs=1 00:09:08.955 00:09:08.955 verify_dump=1 00:09:08.955 verify_backlog=512 00:09:08.955 verify_state_save=0 00:09:08.955 do_verify=1 00:09:08.955 verify=crc32c-intel 00:09:08.955 [job0] 00:09:08.955 filename=/dev/nvme0n1 00:09:08.955 [job1] 00:09:08.955 filename=/dev/nvme0n2 00:09:08.955 [job2] 00:09:08.955 filename=/dev/nvme0n3 00:09:08.955 [job3] 00:09:08.955 filename=/dev/nvme0n4 00:09:08.955 Could not set queue depth (nvme0n1) 00:09:08.955 Could not set queue depth (nvme0n2) 00:09:08.955 Could not set queue depth (nvme0n3) 00:09:08.955 Could not set queue depth (nvme0n4) 00:09:08.955 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:08.955 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:08.955 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:08.955 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:08.955 fio-3.35 00:09:08.955 Starting 4 threads 00:09:10.327 00:09:10.327 job0: (groupid=0, jobs=1): err= 0: pid=2320607: Tue Dec 10 03:57:04 2024 00:09:10.327 
read: IOPS=21, BW=86.2KiB/s (88.3kB/s)(88.0KiB/1021msec) 00:09:10.327 slat (nsec): min=10052, max=18651, avg=16555.27, stdev=2102.09 00:09:10.328 clat (usec): min=40964, max=41979, avg=41085.25, stdev=298.06 00:09:10.328 lat (usec): min=40982, max=41997, avg=41101.80, stdev=298.26 00:09:10.328 clat percentiles (usec): 00:09:10.328 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:10.328 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:10.328 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:09:10.328 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:10.328 | 99.99th=[42206] 00:09:10.328 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:09:10.328 slat (nsec): min=8194, max=33249, avg=11541.46, stdev=3508.68 00:09:10.328 clat (usec): min=151, max=316, avg=210.10, stdev=32.74 00:09:10.328 lat (usec): min=160, max=326, avg=221.64, stdev=33.56 00:09:10.328 clat percentiles (usec): 00:09:10.328 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:09:10.328 | 30.00th=[ 184], 40.00th=[ 198], 50.00th=[ 221], 60.00th=[ 227], 00:09:10.328 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 258], 00:09:10.328 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 318], 99.95th=[ 318], 00:09:10.328 | 99.99th=[ 318] 00:09:10.328 bw ( KiB/s): min= 4096, max= 4096, per=26.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:10.328 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:10.328 lat (usec) : 250=86.52%, 500=9.36% 00:09:10.328 lat (msec) : 50=4.12% 00:09:10.328 cpu : usr=0.29%, sys=0.88%, ctx=535, majf=0, minf=1 00:09:10.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.328 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.328 job1: (groupid=0, jobs=1): err= 0: pid=2320608: Tue Dec 10 03:57:04 2024 00:09:10.328 read: IOPS=2294, BW=9179KiB/s (9399kB/s)(9188KiB/1001msec) 00:09:10.328 slat (nsec): min=7164, max=21288, avg=7817.47, stdev=859.50 00:09:10.328 clat (usec): min=177, max=522, avg=226.00, stdev=36.98 00:09:10.328 lat (usec): min=185, max=535, avg=233.82, stdev=37.18 00:09:10.328 clat percentiles (usec): 00:09:10.328 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 200], 00:09:10.328 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:09:10.328 | 70.00th=[ 229], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 00:09:10.328 | 99.00th=[ 297], 99.50th=[ 449], 99.90th=[ 519], 99.95th=[ 519], 00:09:10.328 | 99.99th=[ 523] 00:09:10.328 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:10.328 slat (usec): min=8, max=25293, avg=19.67, stdev=499.72 00:09:10.328 clat (usec): min=117, max=767, avg=155.86, stdev=36.93 00:09:10.328 lat (usec): min=127, max=25524, avg=175.52, stdev=502.58 00:09:10.328 clat percentiles (usec): 00:09:10.328 | 1.00th=[ 124], 5.00th=[ 129], 10.00th=[ 131], 20.00th=[ 135], 00:09:10.328 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:09:10.328 | 70.00th=[ 151], 80.00th=[ 163], 90.00th=[ 229], 95.00th=[ 239], 00:09:10.328 | 99.00th=[ 253], 99.50th=[ 260], 99.90th=[ 273], 99.95th=[ 343], 00:09:10.328 | 99.99th=[ 766] 00:09:10.328 bw ( KiB/s): min=10160, 
max=10160, per=64.49%, avg=10160.00, stdev= 0.00, samples=1 00:09:10.328 iops : min= 2540, max= 2540, avg=2540.00, stdev= 0.00, samples=1 00:09:10.328 lat (usec) : 250=87.89%, 500=11.94%, 750=0.14%, 1000=0.02% 00:09:10.328 cpu : usr=3.00%, sys=6.00%, ctx=4860, majf=0, minf=1 00:09:10.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.328 issued rwts: total=2297,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.328 job2: (groupid=0, jobs=1): err= 0: pid=2320611: Tue Dec 10 03:57:04 2024 00:09:10.328 read: IOPS=21, BW=86.0KiB/s (88.1kB/s)(88.0KiB/1023msec) 00:09:10.328 slat (nsec): min=9481, max=18597, avg=16191.91, stdev=2132.75 00:09:10.328 clat (usec): min=40774, max=41053, avg=40970.70, stdev=62.70 00:09:10.328 lat (usec): min=40783, max=41070, avg=40986.89, stdev=63.93 00:09:10.328 clat percentiles (usec): 00:09:10.328 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:10.328 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:10.328 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:10.328 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:10.328 | 99.99th=[41157] 00:09:10.328 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:09:10.328 slat (nsec): min=8318, max=31996, avg=11590.05, stdev=3287.31 00:09:10.328 clat (usec): min=159, max=296, avg=219.10, stdev=28.95 00:09:10.328 lat (usec): min=168, max=310, avg=230.69, stdev=29.69 00:09:10.328 clat percentiles (usec): 00:09:10.328 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 184], 00:09:10.328 | 30.00th=[ 212], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:09:10.328 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 255], 00:09:10.328 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 297], 99.95th=[ 297], 00:09:10.328 | 99.99th=[ 297] 00:09:10.328 bw ( KiB/s): min= 4096, max= 4096, per=26.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:10.328 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:10.328 lat (usec) : 250=86.70%, 500=9.18% 00:09:10.328 lat (msec) : 50=4.12% 00:09:10.328 cpu : usr=0.29%, sys=0.88%, ctx=536, majf=0, minf=1 00:09:10.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.328 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.328 job3: (groupid=0, jobs=1): err= 0: pid=2320612: Tue Dec 10 03:57:04 2024 00:09:10.328 read: IOPS=26, BW=108KiB/s (110kB/s)(112KiB/1040msec) 00:09:10.328 slat (nsec): min=8664, max=18670, avg=15000.50, stdev=3701.42 00:09:10.328 clat (usec): min=213, max=42228, avg=32953.83, stdev=17400.42 00:09:10.328 lat (usec): min=228, max=42239, avg=32968.83, stdev=17402.80 00:09:10.328 clat percentiles (usec): 00:09:10.328 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 241], 20.00th=[ 265], 00:09:10.328 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:10.328 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:10.328 | 99.00th=[42206], 
99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:10.328 | 99.99th=[42206] 00:09:10.328 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:09:10.328 slat (nsec): min=8259, max=31387, avg=12099.31, stdev=4546.05 00:09:10.328 clat (usec): min=148, max=362, avg=209.63, stdev=34.68 00:09:10.328 lat (usec): min=157, max=374, avg=221.73, stdev=35.82 00:09:10.328 clat percentiles (usec): 00:09:10.328 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:09:10.328 | 30.00th=[ 180], 40.00th=[ 204], 50.00th=[ 221], 60.00th=[ 227], 00:09:10.328 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 255], 00:09:10.328 | 99.00th=[ 277], 99.50th=[ 306], 99.90th=[ 363], 99.95th=[ 363], 00:09:10.328 | 99.99th=[ 363] 00:09:10.328 bw ( KiB/s): min= 4096, max= 4096, per=26.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:10.328 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:10.328 lat (usec) : 250=88.33%, 500=7.59% 00:09:10.328 lat (msec) : 50=4.07% 00:09:10.328 cpu : usr=0.38%, sys=0.77%, ctx=541, majf=0, minf=1 00:09:10.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.328 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.328 00:09:10.328 Run status group 0 (all jobs): 00:09:10.328 READ: bw=9112KiB/s (9330kB/s), 86.0KiB/s-9179KiB/s (88.1kB/s-9399kB/s), io=9476KiB (9703kB), run=1001-1040msec 00:09:10.328 WRITE: bw=15.4MiB/s (16.1MB/s), 1969KiB/s-9.99MiB/s (2016kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1040msec 00:09:10.328 00:09:10.328 Disk stats (read/write): 00:09:10.328 nvme0n1: ios=60/512, merge=0/0, ticks=1089/108, in_queue=1197, util=86.57% 00:09:10.328 nvme0n2: ios=2021/2048, merge=0/0, ticks=576/301, in_queue=877, util=90.66% 00:09:10.328 nvme0n3: ios=74/512, merge=0/0, ticks=1546/110, in_queue=1656, util=93.53% 00:09:10.328 nvme0n4: ios=81/512, merge=0/0, ticks=1522/98, in_queue=1620, util=94.32% 00:09:10.328 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:10.328 [global] 00:09:10.328 thread=1 00:09:10.328 invalidate=1 00:09:10.328 rw=write 00:09:10.328 time_based=1 00:09:10.328 runtime=1 00:09:10.328 ioengine=libaio 00:09:10.328 direct=1 00:09:10.328 bs=4096 00:09:10.328 iodepth=128 00:09:10.328 norandommap=0 00:09:10.328 numjobs=1 00:09:10.328 00:09:10.328 verify_dump=1 00:09:10.328 verify_backlog=512 00:09:10.328 verify_state_save=0 00:09:10.328 do_verify=1 00:09:10.328 verify=crc32c-intel 00:09:10.328 [job0] 00:09:10.328 filename=/dev/nvme0n1 00:09:10.328 [job1] 00:09:10.328 filename=/dev/nvme0n2 00:09:10.328 [job2] 00:09:10.328 filename=/dev/nvme0n3 00:09:10.328 [job3] 00:09:10.328 filename=/dev/nvme0n4 00:09:10.328 Could not set queue depth (nvme0n1) 00:09:10.328 Could not set queue depth (nvme0n2) 00:09:10.328 Could not set queue depth (nvme0n3) 00:09:10.328 Could not set queue depth (nvme0n4) 00:09:10.328 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:10.328 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:10.328 job2: (g=0): rw=write, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:10.328 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:10.328 fio-3.35 00:09:10.328 Starting 4 threads 00:09:11.710 00:09:11.710 job0: (groupid=0, jobs=1): err= 0: pid=2320959: Tue Dec 10 03:57:05 2024 00:09:11.710 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:09:11.710 slat (usec): min=3, max=9089, avg=109.82, stdev=565.47 00:09:11.710 clat (usec): min=5759, max=33669, avg=14659.70, stdev=4364.61 00:09:11.710 lat (usec): min=5766, max=33682, avg=14769.52, stdev=4410.41 00:09:11.710 clat percentiles (usec): 00:09:11.710 | 1.00th=[ 8586], 5.00th=[10421], 10.00th=[10814], 20.00th=[11076], 00:09:11.710 | 30.00th=[11338], 40.00th=[11731], 50.00th=[13304], 60.00th=[16319], 00:09:11.710 | 70.00th=[16909], 80.00th=[18220], 90.00th=[19530], 95.00th=[22414], 00:09:11.710 | 99.00th=[28443], 99.50th=[29492], 99.90th=[29492], 99.95th=[31851], 00:09:11.710 | 99.99th=[33817] 00:09:11.710 write: IOPS=4604, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1004msec); 0 zone resets 00:09:11.710 slat (usec): min=3, max=8625, avg=94.56, stdev=508.50 00:09:11.710 clat (usec): min=603, max=28044, avg=12961.00, stdev=4304.76 00:09:11.710 lat (usec): min=607, max=28063, avg=13055.57, stdev=4345.52 00:09:11.710 clat percentiles (usec): 00:09:11.710 | 1.00th=[ 3589], 5.00th=[ 8225], 10.00th=[ 9372], 20.00th=[10552], 00:09:11.710 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11600], 60.00th=[12387], 00:09:11.710 | 70.00th=[12649], 80.00th=[15533], 90.00th=[19530], 95.00th=[22152], 00:09:11.710 | 99.00th=[25822], 99.50th=[26084], 99.90th=[27919], 99.95th=[27919], 00:09:11.710 | 99.99th=[27919] 00:09:11.710 bw ( KiB/s): min=15176, max=21644, per=31.65%, avg=18410.00, stdev=4573.57, samples=2 00:09:11.710 iops : min= 3794, max= 5411, avg=4602.50, stdev=1143.39, samples=2 00:09:11.710 lat (usec) : 750=0.04% 00:09:11.710 lat (msec) : 2=0.10%, 4=0.43%, 10=6.84%, 20=84.02%, 50=8.57% 00:09:11.710 cpu : usr=6.98%, sys=8.57%, ctx=362, majf=0, minf=2 00:09:11.710 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:11.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.710 issued rwts: total=4608,4623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.710 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.710 job1: (groupid=0, jobs=1): err= 0: pid=2320960: Tue Dec 10 03:57:05 2024 00:09:11.710 read: IOPS=2714, BW=10.6MiB/s (11.1MB/s)(10.7MiB/1005msec) 00:09:11.710 slat (usec): min=3, max=16332, avg=152.84, stdev=985.77 00:09:11.710 clat (usec): min=932, max=45500, avg=18796.53, stdev=6787.36 00:09:11.710 lat (usec): min=5315, max=52109, avg=18949.37, stdev=6872.60 00:09:11.710 clat percentiles (usec): 00:09:11.710 | 1.00th=[ 6456], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[13304], 00:09:11.710 | 30.00th=[15139], 40.00th=[16909], 50.00th=[18220], 60.00th=[18744], 00:09:11.710 | 70.00th=[21365], 80.00th=[23725], 90.00th=[29230], 95.00th=[29754], 00:09:11.710 | 99.00th=[38011], 99.50th=[39060], 99.90th=[44827], 99.95th=[44827], 00:09:11.710 | 99.99th=[45351] 00:09:11.710 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:09:11.710 slat (usec): min=3, max=16769, avg=181.01, stdev=813.91 00:09:11.710 clat (usec): min=8288, max=63520, avg=24816.26, stdev=11058.52 00:09:11.710 lat (usec): min=8303, max=63546, avg=24997.27, 
stdev=11141.68 00:09:11.710 clat percentiles (usec): 00:09:11.710 | 1.00th=[10421], 5.00th=[13304], 10.00th=[13698], 20.00th=[17171], 00:09:11.710 | 30.00th=[19530], 40.00th=[22152], 50.00th=[22676], 60.00th=[23462], 00:09:11.710 | 70.00th=[25297], 80.00th=[29230], 90.00th=[32900], 95.00th=[55313], 00:09:11.710 | 99.00th=[62653], 99.50th=[63177], 99.90th=[63701], 99.95th=[63701], 00:09:11.710 | 99.99th=[63701] 00:09:11.710 bw ( KiB/s): min=12263, max=12288, per=21.11%, avg=12275.50, stdev=17.68, samples=2 00:09:11.710 iops : min= 3065, max= 3072, avg=3068.50, stdev= 4.95, samples=2 00:09:11.710 lat (usec) : 1000=0.02% 00:09:11.710 lat (msec) : 10=4.90%, 20=42.38%, 50=49.29%, 100=3.41% 00:09:11.710 cpu : usr=3.69%, sys=4.98%, ctx=354, majf=0, minf=2 00:09:11.710 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:11.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.710 issued rwts: total=2728,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.710 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.710 job2: (groupid=0, jobs=1): err= 0: pid=2320961: Tue Dec 10 03:57:05 2024 00:09:11.710 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:09:11.710 slat (usec): min=3, max=9689, avg=135.83, stdev=813.40 00:09:11.710 clat (usec): min=9880, max=28387, avg=16999.41, stdev=2913.15 00:09:11.710 lat (usec): min=9888, max=28395, avg=17135.23, stdev=2995.41 00:09:11.710 clat percentiles (usec): 00:09:11.710 | 1.00th=[11207], 5.00th=[13960], 10.00th=[14222], 20.00th=[14615], 00:09:11.710 | 30.00th=[15270], 40.00th=[15664], 50.00th=[16450], 60.00th=[17171], 00:09:11.710 | 70.00th=[17695], 80.00th=[18744], 90.00th=[20841], 95.00th=[23200], 00:09:11.710 | 99.00th=[26084], 99.50th=[27657], 99.90th=[28443], 99.95th=[28443], 00:09:11.710 | 99.99th=[28443] 00:09:11.710 write: IOPS=3451, BW=13.5MiB/s (14.1MB/s)(13.6MiB/1008msec); 0 zone resets 00:09:11.710 slat (usec): min=5, max=26384, avg=160.94, stdev=870.85 00:09:11.710 clat (usec): min=7706, max=49161, avg=21234.82, stdev=7098.38 00:09:11.710 lat (usec): min=8854, max=49182, avg=21395.75, stdev=7156.55 00:09:11.710 clat percentiles (usec): 00:09:11.710 | 1.00th=[11469], 5.00th=[13566], 10.00th=[13960], 20.00th=[14877], 00:09:11.710 | 30.00th=[16581], 40.00th=[19792], 50.00th=[21103], 60.00th=[21627], 00:09:11.710 | 70.00th=[22152], 80.00th=[24249], 90.00th=[30540], 95.00th=[38536], 00:09:11.710 | 99.00th=[46400], 99.50th=[46924], 99.90th=[49021], 99.95th=[49021], 00:09:11.710 | 99.99th=[49021] 00:09:11.710 bw ( KiB/s): min=12263, max=14528, per=23.03%, avg=13395.50, stdev=1601.60, samples=2 00:09:11.710 iops : min= 3065, max= 3632, avg=3348.50, stdev=400.93, samples=2 00:09:11.711 lat (msec) : 10=0.32%, 20=61.70%, 50=37.98% 00:09:11.711 cpu : usr=2.38%, sys=5.26%, ctx=349, majf=0, minf=1 00:09:11.711 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:11.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.711 issued rwts: total=3072,3479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.711 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.711 job3: (groupid=0, jobs=1): err= 0: pid=2320962: Tue Dec 10 03:57:05 2024 00:09:11.711 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:09:11.711 slat (usec): min=2, max=18439, 
avg=165.45, stdev=1087.67 00:09:11.711 clat (usec): min=9245, max=56857, avg=20816.74, stdev=9020.02 00:09:11.711 lat (usec): min=9273, max=56873, avg=20982.19, stdev=9128.99 00:09:11.711 clat percentiles (usec): 00:09:11.711 | 1.00th=[10552], 5.00th=[12518], 10.00th=[13698], 20.00th=[13960], 00:09:11.711 | 30.00th=[14222], 40.00th=[14615], 50.00th=[16188], 60.00th=[18482], 00:09:11.711 | 70.00th=[26084], 80.00th=[30540], 90.00th=[34341], 95.00th=[36439], 00:09:11.711 | 99.00th=[46924], 99.50th=[47973], 99.90th=[51119], 99.95th=[55313], 00:09:11.711 | 99.99th=[56886] 00:09:11.711 write: IOPS=3457, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1007msec); 0 zone resets 00:09:11.711 slat (usec): min=3, max=6662, avg=131.77, stdev=522.11 00:09:11.711 clat (usec): min=5621, max=44828, avg=18196.79, stdev=5596.79 00:09:11.711 lat (usec): min=7001, max=44837, avg=18328.56, stdev=5630.11 00:09:11.711 clat percentiles (usec): 00:09:11.711 | 1.00th=[ 9634], 5.00th=[13173], 10.00th=[13566], 20.00th=[13960], 00:09:11.711 | 30.00th=[14353], 40.00th=[15139], 50.00th=[16319], 60.00th=[18744], 00:09:11.711 | 70.00th=[21103], 80.00th=[21627], 90.00th=[22676], 95.00th=[29492], 00:09:11.711 | 99.00th=[39584], 99.50th=[40633], 99.90th=[44827], 99.95th=[44827], 00:09:11.711 | 99.99th=[44827] 00:09:11.711 bw ( KiB/s): min=10448, max=16384, per=23.07%, avg=13416.00, stdev=4197.39, samples=2 00:09:11.711 iops : min= 2612, max= 4096, avg=3354.00, stdev=1049.35, samples=2 00:09:11.711 lat (msec) : 10=0.82%, 20=62.08%, 50=37.03%, 100=0.06% 00:09:11.711 cpu : usr=3.68%, sys=7.75%, ctx=389, majf=0, minf=1 00:09:11.711 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:11.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.711 issued rwts: total=3072,3482,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.711 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.711 00:09:11.711 Run status group 0 (all jobs): 00:09:11.711 READ: bw=52.2MiB/s (54.8MB/s), 10.6MiB/s-17.9MiB/s (11.1MB/s-18.8MB/s), io=52.7MiB (55.2MB), run=1004-1008msec 00:09:11.711 WRITE: bw=56.8MiB/s (59.6MB/s), 11.9MiB/s-18.0MiB/s (12.5MB/s-18.9MB/s), io=57.2MiB (60.0MB), run=1004-1008msec 00:09:11.711 00:09:11.711 Disk stats (read/write): 00:09:11.711 nvme0n1: ios=4140/4127, merge=0/0, ticks=19100/16450, in_queue=35550, util=90.98% 00:09:11.711 nvme0n2: ios=2244/2560, merge=0/0, ticks=19608/26546, in_queue=46154, util=95.33% 00:09:11.711 nvme0n3: ios=2611/2808, merge=0/0, ticks=22777/29775, in_queue=52552, util=98.44% 00:09:11.711 nvme0n4: ios=2865/3072, merge=0/0, ticks=20242/20034, in_queue=40276, util=91.40% 00:09:11.711 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:11.711 [global] 00:09:11.711 thread=1 00:09:11.711 invalidate=1 00:09:11.711 rw=randwrite 00:09:11.711 time_based=1 00:09:11.711 runtime=1 00:09:11.711 ioengine=libaio 00:09:11.711 direct=1 00:09:11.711 bs=4096 00:09:11.711 iodepth=128 00:09:11.711 norandommap=0 00:09:11.711 numjobs=1 00:09:11.711 00:09:11.711 verify_dump=1 00:09:11.711 verify_backlog=512 00:09:11.711 verify_state_save=0 00:09:11.711 do_verify=1 00:09:11.711 verify=crc32c-intel 00:09:11.711 [job0] 00:09:11.711 filename=/dev/nvme0n1 00:09:11.711 [job1] 00:09:11.711 filename=/dev/nvme0n2 00:09:11.711 [job2] 00:09:11.711 
filename=/dev/nvme0n3 00:09:11.711 [job3] 00:09:11.711 filename=/dev/nvme0n4 00:09:11.711 Could not set queue depth (nvme0n1) 00:09:11.711 Could not set queue depth (nvme0n2) 00:09:11.711 Could not set queue depth (nvme0n3) 00:09:11.711 Could not set queue depth (nvme0n4) 00:09:11.969 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:11.969 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:11.969 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:11.969 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:11.969 fio-3.35 00:09:11.969 Starting 4 threads 00:09:13.342 00:09:13.342 job0: (groupid=0, jobs=1): err= 0: pid=2321192: Tue Dec 10 03:57:07 2024 00:09:13.342 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:09:13.342 slat (usec): min=2, max=12911, avg=126.49, stdev=840.25 00:09:13.342 clat (usec): min=7342, max=36959, avg=16665.45, stdev=5615.64 00:09:13.342 lat (usec): min=7348, max=37002, avg=16791.94, stdev=5681.21 00:09:13.342 clat percentiles (usec): 00:09:13.342 | 1.00th=[ 7832], 5.00th=[10421], 10.00th=[11207], 20.00th=[11600], 00:09:13.342 | 30.00th=[12780], 40.00th=[13829], 50.00th=[14746], 60.00th=[16909], 00:09:13.342 | 70.00th=[19006], 80.00th=[20841], 90.00th=[25035], 95.00th=[28705], 00:09:13.342 | 99.00th=[32637], 99.50th=[32900], 99.90th=[32900], 99.95th=[36439], 00:09:13.342 | 99.99th=[36963] 00:09:13.342 write: IOPS=3979, BW=15.5MiB/s (16.3MB/s)(15.7MiB/1009msec); 0 zone resets 00:09:13.342 slat (usec): min=3, max=12295, avg=126.41, stdev=804.15 00:09:13.342 clat (usec): min=7029, max=39913, avg=17014.27, stdev=6282.80 00:09:13.342 lat (usec): min=7045, max=39924, avg=17140.67, stdev=6351.80 00:09:13.342 clat percentiles (usec): 00:09:13.342 | 1.00th=[ 7701], 5.00th=[10028], 10.00th=[10552], 20.00th=[10945], 00:09:13.342 | 30.00th=[12256], 40.00th=[13829], 50.00th=[15008], 60.00th=[17957], 00:09:13.342 | 70.00th=[20317], 80.00th=[22938], 90.00th=[25560], 95.00th=[26870], 00:09:13.342 | 99.00th=[35390], 99.50th=[38536], 99.90th=[40109], 99.95th=[40109], 00:09:13.342 | 99.99th=[40109] 00:09:13.342 bw ( KiB/s): min=11968, max=19136, per=25.04%, avg=15552.00, stdev=5068.54, samples=2 00:09:13.342 iops : min= 2992, max= 4784, avg=3888.00, stdev=1267.14, samples=2 00:09:13.342 lat (msec) : 10=3.79%, 20=65.76%, 50=30.45% 00:09:13.342 cpu : usr=3.77%, sys=6.65%, ctx=247, majf=0, minf=1 00:09:13.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:13.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.343 issued rwts: total=3584,4015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.343 job1: (groupid=0, jobs=1): err= 0: pid=2321193: Tue Dec 10 03:57:07 2024 00:09:13.343 read: IOPS=3816, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1008msec) 00:09:13.343 slat (usec): min=2, max=13248, avg=121.76, stdev=817.12 00:09:13.343 clat (usec): min=4696, max=53332, avg=16080.86, stdev=9513.29 00:09:13.343 lat (usec): min=4705, max=53347, avg=16202.62, stdev=9590.20 00:09:13.343 clat percentiles (usec): 00:09:13.343 | 1.00th=[ 6849], 5.00th=[ 8356], 10.00th=[ 9372], 20.00th=[ 9896], 00:09:13.343 | 30.00th=[10814], 
40.00th=[11207], 50.00th=[11469], 60.00th=[12125], 00:09:13.343 | 70.00th=[15795], 80.00th=[22414], 90.00th=[33817], 95.00th=[38536], 00:09:13.343 | 99.00th=[42206], 99.50th=[42730], 99.90th=[45351], 99.95th=[49021], 00:09:13.343 | 99.99th=[53216] 00:09:13.343 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:09:13.343 slat (usec): min=3, max=14749, avg=122.64, stdev=794.29 00:09:13.343 clat (usec): min=6311, max=74602, avg=16099.07, stdev=10876.61 00:09:13.343 lat (usec): min=6323, max=74618, avg=16221.70, stdev=10958.63 00:09:13.343 clat percentiles (usec): 00:09:13.343 | 1.00th=[ 7504], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10290], 00:09:13.343 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11338], 60.00th=[13960], 00:09:13.343 | 70.00th=[15795], 80.00th=[19006], 90.00th=[25822], 95.00th=[38536], 00:09:13.343 | 99.00th=[66323], 99.50th=[70779], 99.90th=[73925], 99.95th=[73925], 00:09:13.343 | 99.99th=[74974] 00:09:13.343 bw ( KiB/s): min=16384, max=16384, per=26.38%, avg=16384.00, stdev= 0.00, samples=2 00:09:13.343 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:13.343 lat (msec) : 10=17.83%, 20=61.44%, 50=19.11%, 100=1.62% 00:09:13.343 cpu : usr=3.67%, sys=5.16%, ctx=332, majf=0, minf=1 00:09:13.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:13.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.343 issued rwts: total=3847,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.343 job2: (groupid=0, jobs=1): err= 0: pid=2321194: Tue Dec 10 03:57:07 2024 00:09:13.343 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:09:13.343 slat (usec): min=2, max=15806, avg=164.82, stdev=1066.86 00:09:13.343 clat (usec): min=7190, max=57714, avg=21050.66, stdev=9949.19 00:09:13.343 lat (usec): min=7217, max=57753, avg=21215.48, stdev=10045.78 00:09:13.343 clat percentiles (usec): 00:09:13.343 | 1.00th=[ 9110], 5.00th=[10552], 10.00th=[12125], 20.00th=[12649], 00:09:13.343 | 30.00th=[12911], 40.00th=[14877], 50.00th=[17957], 60.00th=[20579], 00:09:13.343 | 70.00th=[24511], 80.00th=[30540], 90.00th=[38011], 95.00th=[39584], 00:09:13.343 | 99.00th=[44303], 99.50th=[48497], 99.90th=[52167], 99.95th=[53740], 00:09:13.343 | 99.99th=[57934] 00:09:13.343 write: IOPS=3433, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1007msec); 0 zone resets 00:09:13.343 slat (usec): min=3, max=25060, avg=135.45, stdev=737.00 00:09:13.343 clat (usec): min=6005, max=51809, avg=17739.92, stdev=7799.40 00:09:13.343 lat (usec): min=6023, max=60073, avg=17875.37, stdev=7858.49 00:09:13.343 clat percentiles (usec): 00:09:13.343 | 1.00th=[ 5997], 5.00th=[ 6718], 10.00th=[10552], 20.00th=[12125], 00:09:13.343 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13566], 60.00th=[18482], 00:09:13.343 | 70.00th=[23200], 80.00th=[25035], 90.00th=[28967], 95.00th=[31065], 00:09:13.343 | 99.00th=[40633], 99.50th=[42730], 99.90th=[48497], 99.95th=[48497], 00:09:13.343 | 99.99th=[51643] 00:09:13.343 bw ( KiB/s): min=10280, max=16368, per=21.46%, avg=13324.00, stdev=4304.87, samples=2 00:09:13.343 iops : min= 2570, max= 4092, avg=3331.00, stdev=1076.22, samples=2 00:09:13.343 lat (msec) : 10=6.40%, 20=55.11%, 50=38.39%, 100=0.09% 00:09:13.343 cpu : usr=2.78%, sys=5.17%, ctx=419, majf=0, minf=1 00:09:13.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 
00:09:13.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.343 issued rwts: total=3072,3458,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.343 job3: (groupid=0, jobs=1): err= 0: pid=2321195: Tue Dec 10 03:57:07 2024 00:09:13.343 read: IOPS=3846, BW=15.0MiB/s (15.8MB/s)(15.1MiB/1007msec) 00:09:13.343 slat (usec): min=2, max=21002, avg=142.40, stdev=1016.58 00:09:13.343 clat (usec): min=3366, max=64829, avg=18106.30, stdev=10255.62 00:09:13.343 lat (usec): min=3372, max=64832, avg=18248.70, stdev=10320.52 00:09:13.343 clat percentiles (usec): 00:09:13.343 | 1.00th=[ 5342], 5.00th=[ 9503], 10.00th=[10683], 20.00th=[12256], 00:09:13.343 | 30.00th=[12911], 40.00th=[13698], 50.00th=[14353], 60.00th=[15139], 00:09:13.343 | 70.00th=[20055], 80.00th=[22938], 90.00th=[28181], 95.00th=[39584], 00:09:13.343 | 99.00th=[64226], 99.50th=[64226], 99.90th=[64750], 99.95th=[64750], 00:09:13.343 | 99.99th=[64750] 00:09:13.343 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:09:13.343 slat (usec): min=3, max=10925, avg=96.76, stdev=458.74 00:09:13.343 clat (usec): min=792, max=31229, avg=14087.49, stdev=5458.14 00:09:13.343 lat (usec): min=799, max=31243, avg=14184.25, stdev=5499.88 00:09:13.343 clat percentiles (usec): 00:09:13.343 | 1.00th=[ 3294], 5.00th=[ 5407], 10.00th=[ 7308], 20.00th=[11207], 00:09:13.343 | 30.00th=[11994], 40.00th=[12780], 50.00th=[13304], 60.00th=[14091], 00:09:13.343 | 70.00th=[14353], 80.00th=[15795], 90.00th=[24773], 95.00th=[25297], 00:09:13.343 | 99.00th=[27919], 99.50th=[29492], 99.90th=[30802], 99.95th=[30802], 00:09:13.343 | 99.99th=[31327] 00:09:13.343 bw ( KiB/s): min=16400, max=16400, per=26.41%, avg=16400.00, stdev= 0.00, samples=2 00:09:13.343 iops : min= 4100, max= 4100, avg=4100.00, stdev= 0.00, samples=2 00:09:13.343 lat (usec) : 1000=0.03% 00:09:13.343 lat (msec) : 4=1.15%, 10=9.78%, 20=67.19%, 50=20.03%, 100=1.83% 00:09:13.343 cpu : usr=3.18%, sys=6.36%, ctx=473, majf=0, minf=1 00:09:13.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:13.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.343 issued rwts: total=3873,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.343 00:09:13.343 Run status group 0 (all jobs): 00:09:13.343 READ: bw=55.7MiB/s (58.4MB/s), 11.9MiB/s-15.0MiB/s (12.5MB/s-15.8MB/s), io=56.2MiB (58.9MB), run=1007-1009msec 00:09:13.343 WRITE: bw=60.6MiB/s (63.6MB/s), 13.4MiB/s-15.9MiB/s (14.1MB/s-16.7MB/s), io=61.2MiB (64.2MB), run=1007-1009msec 00:09:13.343 00:09:13.343 Disk stats (read/write): 00:09:13.343 nvme0n1: ios=3143/3584, merge=0/0, ticks=24255/28440, in_queue=52695, util=90.98% 00:09:13.343 nvme0n2: ios=3125/3166, merge=0/0, ticks=20176/19041, in_queue=39217, util=93.39% 00:09:13.344 nvme0n3: ios=2684/3072, merge=0/0, ticks=25111/24484, in_queue=49595, util=96.24% 00:09:13.344 nvme0n4: ios=3122/3383, merge=0/0, ticks=38985/38564, in_queue=77549, util=98.42% 00:09:13.344 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:13.344 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2321332 00:09:13.344 03:57:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:13.344 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:13.344 [global] 00:09:13.344 thread=1 00:09:13.344 invalidate=1 00:09:13.344 rw=read 00:09:13.344 time_based=1 00:09:13.344 runtime=10 00:09:13.344 ioengine=libaio 00:09:13.344 direct=1 00:09:13.344 bs=4096 00:09:13.344 iodepth=1 00:09:13.344 norandommap=1 00:09:13.344 numjobs=1 00:09:13.344 00:09:13.344 [job0] 00:09:13.344 filename=/dev/nvme0n1 00:09:13.344 [job1] 00:09:13.344 filename=/dev/nvme0n2 00:09:13.344 [job2] 00:09:13.344 filename=/dev/nvme0n3 00:09:13.344 [job3] 00:09:13.344 filename=/dev/nvme0n4 00:09:13.344 Could not set queue depth (nvme0n1) 00:09:13.344 Could not set queue depth (nvme0n2) 00:09:13.344 Could not set queue depth (nvme0n3) 00:09:13.344 Could not set queue depth (nvme0n4) 00:09:13.344 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.344 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.344 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.344 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.344 fio-3.35 00:09:13.344 Starting 4 threads 00:09:16.624 03:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:16.624 03:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:16.624 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=589824, buflen=4096 00:09:16.624 fio: pid=2321545, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:16.624 03:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:16.624 03:57:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:16.624 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=41705472, buflen=4096 00:09:16.624 fio: pid=2321535, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:16.882 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=45178880, buflen=4096 00:09:16.882 fio: pid=2321492, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:16.882 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:16.882 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:17.449 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:17.449 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:17.449 fio: io_u error 
on file /dev/nvme0n2: Input/output error: read offset=45993984, buflen=4096 00:09:17.449 fio: pid=2321508, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:09:17.449 00:09:17.449 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2321492: Tue Dec 10 03:57:11 2024 00:09:17.449 read: IOPS=3137, BW=12.3MiB/s (12.8MB/s)(43.1MiB/3516msec) 00:09:17.449 slat (usec): min=5, max=29979, avg=17.63, stdev=414.75 00:09:17.449 clat (usec): min=175, max=41869, avg=295.84, stdev=556.59 00:09:17.449 lat (usec): min=181, max=41876, avg=313.47, stdev=693.87 00:09:17.449 clat percentiles (usec): 00:09:17.449 | 1.00th=[ 198], 5.00th=[ 221], 10.00th=[ 231], 20.00th=[ 245], 00:09:17.449 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 289], 00:09:17.449 | 70.00th=[ 302], 80.00th=[ 318], 90.00th=[ 343], 95.00th=[ 388], 00:09:17.449 | 99.00th=[ 545], 99.50th=[ 578], 99.90th=[ 783], 99.95th=[ 1057], 00:09:17.449 | 99.99th=[40633] 00:09:17.449 bw ( KiB/s): min=11072, max=14184, per=36.82%, avg=12616.00, stdev=1319.45, samples=6 00:09:17.449 iops : min= 2768, max= 3546, avg=3154.00, stdev=329.86, samples=6 00:09:17.449 lat (usec) : 250=23.72%, 500=74.35%, 750=1.79%, 1000=0.05% 00:09:17.449 lat (msec) : 2=0.05%, 50=0.02% 00:09:17.449 cpu : usr=2.02%, sys=4.98%, ctx=11036, majf=0, minf=2 00:09:17.449 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:17.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.449 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.449 issued rwts: total=11031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.449 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:17.449 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2321508: Tue Dec 10 03:57:11 2024 00:09:17.449 read: IOPS=2952, BW=11.5MiB/s (12.1MB/s)(43.9MiB/3804msec) 00:09:17.449 slat (usec): min=4, max=26986, avg=19.77, stdev=375.38 00:09:17.449 clat (usec): min=163, max=41082, avg=316.28, stdev=1037.75 00:09:17.449 lat (usec): min=168, max=41090, avg=335.44, stdev=1101.73 00:09:17.449 clat percentiles (usec): 00:09:17.449 | 1.00th=[ 186], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 219], 00:09:17.449 | 30.00th=[ 233], 40.00th=[ 251], 50.00th=[ 269], 60.00th=[ 285], 00:09:17.449 | 70.00th=[ 314], 80.00th=[ 359], 90.00th=[ 400], 95.00th=[ 445], 00:09:17.449 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 644], 99.95th=[41157], 00:09:17.449 | 99.99th=[41157] 00:09:17.449 bw ( KiB/s): min=10504, max=13956, per=36.31%, avg=12440.57, stdev=1394.16, samples=7 00:09:17.449 iops : min= 2626, max= 3489, avg=3110.14, stdev=348.54, samples=7 00:09:17.449 lat (usec) : 250=39.71%, 500=58.23%, 750=1.96% 00:09:17.449 lat (msec) : 2=0.01%, 4=0.01%, 20=0.02%, 50=0.06% 00:09:17.449 cpu : usr=1.47%, sys=4.08%, ctx=11238, majf=0, minf=1 00:09:17.449 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:17.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.449 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.449 issued rwts: total=11230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.449 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:17.449 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2321535: Tue Dec 10 03:57:11 2024 00:09:17.449 read: IOPS=3156, BW=12.3MiB/s 
(12.9MB/s)(39.8MiB/3226msec) 00:09:17.449 slat (usec): min=4, max=11618, avg=12.80, stdev=134.17 00:09:17.449 clat (usec): min=193, max=40853, avg=298.51, stdev=650.50 00:09:17.449 lat (usec): min=199, max=40863, avg=311.31, stdev=664.49 00:09:17.449 clat percentiles (usec): 00:09:17.449 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 247], 00:09:17.449 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:09:17.449 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 363], 95.00th=[ 396], 00:09:17.449 | 99.00th=[ 486], 99.50th=[ 529], 99.90th=[ 889], 99.95th=[ 3589], 00:09:17.449 | 99.99th=[40109] 00:09:17.449 bw ( KiB/s): min=10944, max=13936, per=37.24%, avg=12761.33, stdev=1330.92, samples=6 00:09:17.449 iops : min= 2736, max= 3484, avg=3190.33, stdev=332.73, samples=6 00:09:17.449 lat (usec) : 250=22.99%, 500=76.23%, 750=0.62%, 1000=0.06% 00:09:17.449 lat (msec) : 2=0.03%, 4=0.02%, 10=0.01%, 50=0.04% 00:09:17.450 cpu : usr=1.83%, sys=5.09%, ctx=10185, majf=0, minf=2 00:09:17.450 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:17.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.450 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.450 issued rwts: total=10183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.450 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:17.450 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2321545: Tue Dec 10 03:57:11 2024 00:09:17.450 read: IOPS=49, BW=197KiB/s (201kB/s)(576KiB/2931msec) 00:09:17.450 slat (nsec): min=7782, max=44896, avg=18619.11, stdev=6858.78 00:09:17.450 clat (usec): min=235, max=43008, avg=20164.80, stdev=20327.14 00:09:17.450 lat (usec): min=254, max=43025, avg=20183.31, stdev=20325.96 00:09:17.450 clat percentiles (usec): 00:09:17.450 | 1.00th=[ 247], 5.00th=[ 277], 10.00th=[ 297], 20.00th=[ 330], 00:09:17.450 | 30.00th=[ 347], 40.00th=[ 383], 50.00th=[ 1205], 60.00th=[40633], 00:09:17.450 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:17.450 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:17.450 | 99.99th=[43254] 00:09:17.450 bw ( KiB/s): min= 136, max= 400, per=0.62%, avg=211.20, stdev=106.97, samples=5 00:09:17.450 iops : min= 34, max= 100, avg=52.80, stdev=26.74, samples=5 00:09:17.450 lat (usec) : 250=2.07%, 500=44.83%, 750=1.38% 00:09:17.450 lat (msec) : 2=1.38%, 10=1.38%, 50=48.28% 00:09:17.450 cpu : usr=0.00%, sys=0.17%, ctx=146, majf=0, minf=1 00:09:17.450 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:17.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.450 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.450 issued rwts: total=145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.450 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:17.450 00:09:17.450 Run status group 0 (all jobs): 00:09:17.450 READ: bw=33.5MiB/s (35.1MB/s), 197KiB/s-12.3MiB/s (201kB/s-12.9MB/s), io=127MiB (133MB), run=2931-3804msec 00:09:17.450 00:09:17.450 Disk stats (read/write): 00:09:17.450 nvme0n1: ios=10498/0, merge=0/0, ticks=3021/0, in_queue=3021, util=93.62% 00:09:17.450 nvme0n2: ios=11224/0, merge=0/0, ticks=3247/0, in_queue=3247, util=94.21% 00:09:17.450 nvme0n3: ios=9878/0, merge=0/0, ticks=2833/0, in_queue=2833, util=96.19% 00:09:17.450 nvme0n4: ios=183/0, merge=0/0, ticks=2980/0, in_queue=2980, 
util=99.86% 00:09:17.450 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:17.450 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:17.708 03:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:17.708 03:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:18.274 03:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:18.274 03:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:18.274 03:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:18.274 03:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:18.532 03:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:18.532 03:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2321332 00:09:18.532 03:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:18.532 03:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:18.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.791 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:18.791 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:18.791 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:18.791 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.791 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:18.791 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.791 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:18.791 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:18.791 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:18.791 nvmf hotplug test: fio failed as expected 00:09:18.791 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:19.049 03:57:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:19.049 rmmod nvme_tcp 00:09:19.049 rmmod nvme_fabrics 00:09:19.049 rmmod nvme_keyring 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2319181 ']' 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2319181 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2319181 ']' 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2319181 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2319181 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2319181' 00:09:19.049 killing process with pid 2319181 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2319181 00:09:19.049 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2319181 00:09:19.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:19.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:19.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.847 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:21.847 00:09:21.847 real 0m24.296s 00:09:21.847 user 1m23.832s 00:09:21.847 sys 0m7.813s 00:09:21.847 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.847 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.847 ************************************ 00:09:21.847 END TEST nvmf_fio_target 00:09:21.847 ************************************ 00:09:21.847 03:57:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:21.847 03:57:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:21.847 03:57:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.847 03:57:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.847 ************************************ 00:09:21.847 START TEST nvmf_bdevio 00:09:21.847 ************************************ 00:09:21.847 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:21.847 * Looking for test storage... 
00:09:21.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.847 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:21.847 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:21.847 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:21.847 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:21.847 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.847 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.847 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:21.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.848 --rc genhtml_branch_coverage=1 00:09:21.848 --rc genhtml_function_coverage=1 00:09:21.848 --rc genhtml_legend=1 00:09:21.848 --rc geninfo_all_blocks=1 00:09:21.848 --rc geninfo_unexecuted_blocks=1 00:09:21.848 00:09:21.848 ' 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:21.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.848 --rc genhtml_branch_coverage=1 00:09:21.848 --rc genhtml_function_coverage=1 00:09:21.848 --rc genhtml_legend=1 00:09:21.848 --rc geninfo_all_blocks=1 00:09:21.848 --rc geninfo_unexecuted_blocks=1 00:09:21.848 00:09:21.848 ' 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:21.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.848 --rc genhtml_branch_coverage=1 00:09:21.848 --rc genhtml_function_coverage=1 00:09:21.848 --rc genhtml_legend=1 00:09:21.848 --rc geninfo_all_blocks=1 00:09:21.848 --rc geninfo_unexecuted_blocks=1 00:09:21.848 00:09:21.848 ' 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:21.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.848 --rc genhtml_branch_coverage=1 00:09:21.848 --rc genhtml_function_coverage=1 00:09:21.848 --rc genhtml_legend=1 00:09:21.848 --rc geninfo_all_blocks=1 00:09:21.848 --rc geninfo_unexecuted_blocks=1 00:09:21.848 00:09:21.848 ' 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:21.848 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.754 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:23.755 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:23.755 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:23.755 03:57:18 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:23.755 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:23.755 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.755 
03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.755 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:24.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:09:24.014 00:09:24.014 --- 10.0.0.2 ping statistics --- 00:09:24.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.014 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:24.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:09:24.014 00:09:24.014 --- 10.0.0.1 ping statistics --- 00:09:24.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.014 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:24.014 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.272 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2324697 00:09:24.272 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:24.272 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2324697 00:09:24.272 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2324697 ']' 00:09:24.272 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.272 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.272 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.272 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.272 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.272 [2024-12-10 03:57:18.443682] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:09:24.272 [2024-12-10 03:57:18.443765] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.272 [2024-12-10 03:57:18.512840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.272 [2024-12-10 03:57:18.567225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.272 [2024-12-10 03:57:18.567287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.272 [2024-12-10 03:57:18.567317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.272 [2024-12-10 03:57:18.567328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.272 [2024-12-10 03:57:18.567339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.272 [2024-12-10 03:57:18.569020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:24.272 [2024-12-10 03:57:18.569084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:24.272 [2024-12-10 03:57:18.569148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:24.272 [2024-12-10 03:57:18.569152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.530 [2024-12-10 03:57:18.763512] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.530 Malloc0 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.530 03:57:18 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.530 [2024-12-10 03:57:18.827998] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:24.530 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:24.530 { 00:09:24.530 "params": { 00:09:24.530 "name": "Nvme$subsystem", 00:09:24.530 "trtype": "$TEST_TRANSPORT", 00:09:24.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.530 "adrfam": "ipv4", 00:09:24.530 "trsvcid": "$NVMF_PORT", 00:09:24.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.530 "hdgst": ${hdgst:-false}, 00:09:24.530 "ddgst": ${ddgst:-false} 00:09:24.530 }, 00:09:24.531 "method": "bdev_nvme_attach_controller" 00:09:24.531 } 00:09:24.531 EOF 00:09:24.531 )") 00:09:24.531 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:24.531 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:24.531 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:24.531 03:57:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:24.531 "params": { 00:09:24.531 "name": "Nvme1", 00:09:24.531 "trtype": "tcp", 00:09:24.531 "traddr": "10.0.0.2", 00:09:24.531 "adrfam": "ipv4", 00:09:24.531 "trsvcid": "4420", 00:09:24.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.531 "hdgst": false, 00:09:24.531 "ddgst": false 00:09:24.531 }, 00:09:24.531 "method": "bdev_nvme_attach_controller" 00:09:24.531 }' 00:09:24.531 [2024-12-10 03:57:18.879774] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:09:24.531 [2024-12-10 03:57:18.879861] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2324723 ] 00:09:24.789 [2024-12-10 03:57:18.952524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:24.789 [2024-12-10 03:57:19.017137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.789 [2024-12-10 03:57:19.017193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.789 [2024-12-10 03:57:19.017197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.051 I/O targets: 00:09:25.051 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:25.051 00:09:25.051 00:09:25.051 CUnit - A unit testing framework for C - Version 2.1-3 00:09:25.051 http://cunit.sourceforge.net/ 00:09:25.051 00:09:25.051 00:09:25.051 Suite: bdevio tests on: Nvme1n1 00:09:25.051 Test: blockdev write read block ...passed 00:09:25.051 Test: blockdev write zeroes read block ...passed 00:09:25.051 Test: blockdev write zeroes read no split ...passed 00:09:25.051 Test: blockdev write zeroes read split ...passed 00:09:25.051 Test: blockdev write zeroes read split partial ...passed 00:09:25.051 Test: blockdev reset ...[2024-12-10 03:57:19.326271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:25.051 [2024-12-10 03:57:19.326380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab98c0 (9): Bad file descriptor 00:09:25.051 [2024-12-10 03:57:19.429761] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:25.051 passed 00:09:25.051 Test: blockdev write read 8 blocks ...passed 00:09:25.051 Test: blockdev write read size > 128k ...passed 00:09:25.051 Test: blockdev write read invalid size ...passed 00:09:25.348 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:25.348 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:25.348 Test: blockdev write read max offset ...passed 00:09:25.348 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:25.348 Test: blockdev writev readv 8 blocks ...passed 00:09:25.348 Test: blockdev writev readv 30 x 1block ...passed 00:09:25.348 Test: blockdev writev readv block ...passed 00:09:25.348 Test: blockdev writev readv size > 128k ...passed 00:09:25.348 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:25.348 Test: blockdev comparev and writev ...[2024-12-10 03:57:19.687300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:25.348 [2024-12-10 03:57:19.687340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:25.348 [2024-12-10 03:57:19.687366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:25.348 [2024-12-10 03:57:19.687384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:25.348 [2024-12-10 03:57:19.687709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:25.348 [2024-12-10 03:57:19.687734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:25.348 [2024-12-10 03:57:19.687757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:25.348 [2024-12-10 03:57:19.687786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:25.348 [2024-12-10 03:57:19.688109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:25.348 [2024-12-10 03:57:19.688133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:25.348 [2024-12-10 03:57:19.688156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:25.348 [2024-12-10 03:57:19.688172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:25.348 [2024-12-10 03:57:19.688481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:25.348 [2024-12-10 03:57:19.688505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:25.348 [2024-12-10 03:57:19.688526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:25.348 [2024-12-10 03:57:19.688550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:25.631 passed 00:09:25.631 Test: blockdev nvme passthru rw ...passed 00:09:25.631 Test: blockdev nvme passthru vendor specific ...[2024-12-10 03:57:19.771790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:25.631 [2024-12-10 03:57:19.771817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:25.631 [2024-12-10 03:57:19.771972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:25.631 [2024-12-10 03:57:19.771995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:25.631 [2024-12-10 03:57:19.772141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:25.631 [2024-12-10 03:57:19.772164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:25.631 [2024-12-10 03:57:19.772309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:25.632 [2024-12-10 03:57:19.772332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:25.632 passed 00:09:25.632 Test: blockdev nvme admin passthru ...passed 00:09:25.632 Test: blockdev copy ...passed 00:09:25.632 00:09:25.632 Run Summary: Type Total Ran Passed Failed Inactive 00:09:25.632 suites 1 1 n/a 0 0 00:09:25.632 tests 23 23 23 0 0 00:09:25.632 asserts 152 152 152 0 n/a 00:09:25.632 00:09:25.632 Elapsed time = 1.234 seconds 00:09:25.889 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:25.889 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.889 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:25.889 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.889 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:25.890 rmmod nvme_tcp 00:09:25.890 rmmod nvme_fabrics 00:09:25.890 rmmod nvme_keyring 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
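Note on the module unload recorded just above: nvmfcleanup masks failures while the kernel transport modules are pulled out, retries a bounded number of times, then restores strict error handling. A condensed sketch of the captured commands, assuming the loop exits on the first successful unload (the break condition and any pacing between attempts are not visible in this trace):

    sync
    set +e                                   # tolerate EBUSY while initiators detach
    for i in {1..20}; do
        # verbose removal of nvme-tcp also drops the nvme_fabrics/nvme_keyring dependencies,
        # matching the rmmod output captured above
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1                              # assumed pacing; not shown in the trace
    done
    set -e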
00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2324697 ']' 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2324697 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2324697 ']' 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2324697 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2324697 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2324697' 00:09:25.890 killing process with pid 2324697 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2324697 00:09:25.890 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2324697 00:09:26.147 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:26.147 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:26.147 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:26.148 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:26.148 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:26.148 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:26.148 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:26.148 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:26.148 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:26.148 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.148 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.148 03:57:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.051 03:57:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:28.051 00:09:28.051 real 0m6.691s 00:09:28.051 user 0m10.331s 00:09:28.051 sys 0m2.247s 00:09:28.051 03:57:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.051 03:57:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:28.310 ************************************ 00:09:28.310 END TEST nvmf_bdevio 00:09:28.310 ************************************ 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:28.310 00:09:28.310 real 3m55.640s 00:09:28.310 user 10m11.008s 00:09:28.310 sys 1m8.359s 
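Note on the process teardown recorded above: the target PID is validated, checked against the sudo wrapper, then signalled and reaped, and the SPDK-tagged firewall rules are stripped by filtering on the comment added at setup time. A condensed sketch of those captured steps (the pid variable and the pipeline form are illustrative; the trace shows the three iptables commands as separate steps, which is consistent with a pipeline):

    pid=2324697                                       # nvmf_tgt pid from this run
    kill -0 "$pid"                                    # confirm the process is still alive
    [ "$(ps --no-headers -o comm= "$pid")" != sudo ]  # never signal a sudo wrapper directly
    kill "$pid" && wait "$pid"                        # reap the nvmf_tgt child
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK_NVMF-tagged rules
    ip -4 addr flush cvl_0_1                          # release the initiator-side test address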
00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:28.310 ************************************ 00:09:28.310 END TEST nvmf_target_core 00:09:28.310 ************************************ 00:09:28.310 03:57:22 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:28.310 03:57:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:28.310 03:57:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.310 03:57:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.310 ************************************ 00:09:28.310 START TEST nvmf_target_extra 00:09:28.310 ************************************ 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:28.310 * Looking for test storage... 00:09:28.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.310 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:28.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.311 --rc genhtml_branch_coverage=1 00:09:28.311 --rc genhtml_function_coverage=1 00:09:28.311 --rc genhtml_legend=1 00:09:28.311 --rc geninfo_all_blocks=1 00:09:28.311 --rc geninfo_unexecuted_blocks=1 00:09:28.311 00:09:28.311 ' 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:28.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.311 --rc genhtml_branch_coverage=1 00:09:28.311 --rc genhtml_function_coverage=1 00:09:28.311 --rc genhtml_legend=1 00:09:28.311 --rc geninfo_all_blocks=1 00:09:28.311 --rc geninfo_unexecuted_blocks=1 00:09:28.311 00:09:28.311 ' 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:28.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.311 --rc genhtml_branch_coverage=1 00:09:28.311 --rc genhtml_function_coverage=1 00:09:28.311 --rc genhtml_legend=1 00:09:28.311 --rc geninfo_all_blocks=1 00:09:28.311 --rc geninfo_unexecuted_blocks=1 00:09:28.311 00:09:28.311 ' 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:28.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.311 --rc genhtml_branch_coverage=1 00:09:28.311 --rc genhtml_function_coverage=1 00:09:28.311 --rc genhtml_legend=1 00:09:28.311 --rc geninfo_all_blocks=1 00:09:28.311 --rc geninfo_unexecuted_blocks=1 00:09:28.311 00:09:28.311 ' 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.311 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.570 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.570 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:28.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:28.570 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:28.570 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:28.570 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:28.570 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:28.570 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:28.570 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:28.570 03:57:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:28.570 03:57:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:28.570 03:57:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.570 03:57:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:28.570 ************************************ 00:09:28.570 START TEST nvmf_example 00:09:28.570 ************************************ 00:09:28.570 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:28.570 * Looking for test storage... 
00:09:28.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:28.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.571 --rc genhtml_branch_coverage=1 00:09:28.571 --rc genhtml_function_coverage=1 00:09:28.571 --rc genhtml_legend=1 00:09:28.571 --rc geninfo_all_blocks=1 00:09:28.571 --rc geninfo_unexecuted_blocks=1 00:09:28.571 00:09:28.571 ' 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:28.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.571 --rc genhtml_branch_coverage=1 00:09:28.571 --rc genhtml_function_coverage=1 00:09:28.571 --rc genhtml_legend=1 00:09:28.571 --rc geninfo_all_blocks=1 00:09:28.571 --rc geninfo_unexecuted_blocks=1 00:09:28.571 00:09:28.571 ' 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:28.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.571 --rc genhtml_branch_coverage=1 00:09:28.571 --rc genhtml_function_coverage=1 00:09:28.571 --rc genhtml_legend=1 00:09:28.571 --rc geninfo_all_blocks=1 00:09:28.571 --rc geninfo_unexecuted_blocks=1 00:09:28.571 00:09:28.571 ' 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:28.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.571 --rc genhtml_branch_coverage=1 00:09:28.571 --rc genhtml_function_coverage=1 00:09:28.571 --rc genhtml_legend=1 00:09:28.571 --rc geninfo_all_blocks=1 00:09:28.571 --rc geninfo_unexecuted_blocks=1 00:09:28.571 00:09:28.571 ' 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:28.571 03:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:28.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:28.571 03:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:28.571 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:28.572 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:28.572 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:28.572 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:28.572 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:28.572 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:28.572 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.572 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:28.572 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:28.572 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:28.572 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.572 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.572 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.572 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:28.572 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:28.572 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:28.572 03:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:31.109 03:57:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:31.109 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:31.109 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:31.109 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:31.109 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:31.109 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.110 03:57:25 
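The discovery loop above resolves each whitelisted PCI address to its kernel interface names through sysfs; stripped of the xtrace prefixes, the core of one iteration is roughly:

  pci=0000:0a:00.0                                   # one of the two E810 ports found above
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. /sys/bus/pci/devices/.../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
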
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:31.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
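Reconstructed by hand, the namespace plumbing traced just above boils down to the following (cvl_0_0/cvl_0_1 are simply what the E810 ports are named on this host; treat them as placeholders):

  ip netns add cvl_0_0_ns_spdk                        # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP stays on the host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # host -> target reachability check
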
00:09:31.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:09:31.110 00:09:31.110 --- 10.0.0.2 ping statistics --- 00:09:31.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.110 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:09:31.110 00:09:31.110 --- 10.0.0.1 ping statistics --- 00:09:31.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.110 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2326993 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2326993 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2326993 ']' 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.110 03:57:25 
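The waitforlisten step that follows just polls until the freshly launched app answers on its RPC socket. A minimal stand-in for that helper (illustrative only, not the actual implementation in autotest_common.sh; paths relative to the SPDK checkout):

  pid=$!                                   # PID of the example target launched with & from this shell
  rpc_addr=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
      # rpc_get_methods succeeds once the app is up and listening on the socket
      if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
          break
      fi
      kill -0 "$pid" 2> /dev/null || { echo "app exited before listening"; exit 1; }
      sleep 0.1
  done
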
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.110 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.042 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.042 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:32.042 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:32.042 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:32.042 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
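The rpc_cmd calls traced above are the whole target configuration for this test; issued by hand with scripts/rpc.py against the default /var/tmp/spdk.sock socket they would read:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192         # TCP transport, flags as used by the test script
  scripts/rpc.py bdev_malloc_create 64 512                       # 64 MiB malloc bdev, 512-byte blocks (returns Malloc0 here)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
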
xtrace_disable 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:32.300 03:57:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:44.497 Initializing NVMe Controllers 00:09:44.497 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:44.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:44.497 Initialization complete. Launching workers. 00:09:44.497 ======================================================== 00:09:44.497 Latency(us) 00:09:44.497 Device Information : IOPS MiB/s Average min max 00:09:44.497 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14681.90 57.35 4359.51 784.75 16428.74 00:09:44.497 ======================================================== 00:09:44.497 Total : 14681.90 57.35 4359.51 784.75 16428.74 00:09:44.497 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:44.497 rmmod nvme_tcp 00:09:44.497 rmmod nvme_fabrics 00:09:44.497 rmmod nvme_keyring 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2326993 ']' 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2326993 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2326993 ']' 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2326993 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2326993 00:09:44.497 03:57:36 
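For reference, the spdk_nvme_perf invocation above exercises the new listener with queue depth 64 (-q), 4 KiB I/Os (-o 4096), a random mixed workload that is 30% reads (-w randrw -M 30), for 10 seconds (-t 10); rerunning it by hand only needs the same transport ID string:

  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
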
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2326993' 00:09:44.497 killing process with pid 2326993 00:09:44.497 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2326993 00:09:44.498 03:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2326993 00:09:44.498 nvmf threads initialize successfully 00:09:44.498 bdev subsystem init successfully 00:09:44.498 created a nvmf target service 00:09:44.498 create targets's poll groups done 00:09:44.498 all subsystems of target started 00:09:44.498 nvmf target is running 00:09:44.498 all subsystems of target stopped 00:09:44.498 destroy targets's poll groups done 00:09:44.498 destroyed the nvmf target service 00:09:44.498 bdev subsystem finish successfully 00:09:44.498 nvmf threads destroy successfully 00:09:44.498 03:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:44.498 03:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:44.498 03:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:44.498 03:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:44.498 03:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:44.498 03:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:44.498 03:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:44.498 03:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:44.498 03:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:44.498 03:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.498 03:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.498 03:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.757 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:44.757 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:44.757 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:44.757 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.757 00:09:44.757 real 0m16.385s 00:09:44.757 user 0m45.719s 00:09:44.757 sys 0m3.380s 00:09:44.757 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.757 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.757 ************************************ 00:09:44.757 END TEST nvmf_example 00:09:44.757 ************************************ 00:09:44.757 03:57:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
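The teardown traced above undoes the earlier setup. Condensed into a hand-runnable sketch (pid and interface names as used in this run; the namespace removal is written out explicitly since _remove_spdk_ns hides its body behind xtrace_disable, so treat that line as an equivalent rather than the helper itself):

  pid=2326993                                                 # nvmfpid recorded when the target started
  [ "$(ps --no-headers -o comm= $pid)" != sudo ] && kill $pid && wait $pid   # wait works because the target is a child of this shell
  modprobe -v -r nvme-tcp                                     # unload host-side NVMe/TCP modules
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore        # drop only the SPDK-tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                             # equivalent of remove_spdk_ns
  ip -4 addr flush cvl_0_1
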
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:44.757 03:57:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.757 03:57:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.757 03:57:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:45.017 ************************************ 00:09:45.017 START TEST nvmf_filesystem 00:09:45.017 ************************************ 00:09:45.017 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:45.017 * Looking for test storage... 00:09:45.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.017 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:45.017 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:45.017 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.017 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.018 --rc genhtml_branch_coverage=1 00:09:45.018 --rc genhtml_function_coverage=1 00:09:45.018 --rc genhtml_legend=1 00:09:45.018 --rc geninfo_all_blocks=1 00:09:45.018 --rc geninfo_unexecuted_blocks=1 00:09:45.018 00:09:45.018 ' 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.018 --rc genhtml_branch_coverage=1 00:09:45.018 --rc genhtml_function_coverage=1 00:09:45.018 --rc genhtml_legend=1 00:09:45.018 --rc geninfo_all_blocks=1 00:09:45.018 --rc geninfo_unexecuted_blocks=1 00:09:45.018 00:09:45.018 ' 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:45.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.018 --rc genhtml_branch_coverage=1 00:09:45.018 --rc genhtml_function_coverage=1 00:09:45.018 --rc genhtml_legend=1 00:09:45.018 --rc geninfo_all_blocks=1 00:09:45.018 --rc geninfo_unexecuted_blocks=1 00:09:45.018 00:09:45.018 ' 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.018 --rc genhtml_branch_coverage=1 00:09:45.018 --rc genhtml_function_coverage=1 00:09:45.018 --rc genhtml_legend=1 00:09:45.018 --rc geninfo_all_blocks=1 00:09:45.018 --rc geninfo_unexecuted_blocks=1 00:09:45.018 00:09:45.018 ' 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:45.018 03:57:39 
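The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates 2.x. Reduced to a standalone helper under the same splitting rules (split on '.', '-', ':' and compare component-wise), a sketch would be:

  version_lt() {                            # return 0 (true) when $1 < $2
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      local i
      for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
          ((${ver1[i]:-0} < ${ver2[i]:-0})) && return 0
          ((${ver1[i]:-0} > ${ver2[i]:-0})) && return 1
      done
      return 1                              # versions are equal
  }
  version_lt 1.15 2 && echo "lcov is older than 2.x"
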
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:45.018 
03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:45.018 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:45.019 #define SPDK_CONFIG_H 00:09:45.019 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:45.019 #define SPDK_CONFIG_APPS 1 00:09:45.019 #define SPDK_CONFIG_ARCH native 00:09:45.019 #undef SPDK_CONFIG_ASAN 00:09:45.019 #undef SPDK_CONFIG_AVAHI 00:09:45.019 #undef SPDK_CONFIG_CET 00:09:45.019 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:45.019 #define SPDK_CONFIG_COVERAGE 1 00:09:45.019 #define SPDK_CONFIG_CROSS_PREFIX 00:09:45.019 #undef SPDK_CONFIG_CRYPTO 00:09:45.019 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:45.019 #undef SPDK_CONFIG_CUSTOMOCF 00:09:45.019 #undef SPDK_CONFIG_DAOS 00:09:45.019 #define SPDK_CONFIG_DAOS_DIR 00:09:45.019 #define SPDK_CONFIG_DEBUG 1 00:09:45.019 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:45.019 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:45.019 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:45.019 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:45.019 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:45.019 #undef SPDK_CONFIG_DPDK_UADK 00:09:45.019 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:45.019 #define SPDK_CONFIG_EXAMPLES 1 00:09:45.019 #undef SPDK_CONFIG_FC 00:09:45.019 #define SPDK_CONFIG_FC_PATH 00:09:45.019 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:45.019 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:45.019 #define SPDK_CONFIG_FSDEV 1 00:09:45.019 #undef SPDK_CONFIG_FUSE 00:09:45.019 #undef SPDK_CONFIG_FUZZER 00:09:45.019 #define SPDK_CONFIG_FUZZER_LIB 00:09:45.019 #undef SPDK_CONFIG_GOLANG 00:09:45.019 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:45.019 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:45.019 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:45.019 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:45.019 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:45.019 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:45.019 #undef SPDK_CONFIG_HAVE_LZ4 00:09:45.019 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:45.019 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:45.019 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:45.019 #define SPDK_CONFIG_IDXD 1 00:09:45.019 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:45.019 #undef SPDK_CONFIG_IPSEC_MB 00:09:45.019 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:45.019 #define SPDK_CONFIG_ISAL 1 00:09:45.019 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:45.019 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:45.019 #define SPDK_CONFIG_LIBDIR 00:09:45.019 #undef SPDK_CONFIG_LTO 00:09:45.019 #define SPDK_CONFIG_MAX_LCORES 128 00:09:45.019 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:45.019 #define SPDK_CONFIG_NVME_CUSE 1 00:09:45.019 #undef SPDK_CONFIG_OCF 00:09:45.019 #define SPDK_CONFIG_OCF_PATH 00:09:45.019 #define SPDK_CONFIG_OPENSSL_PATH 00:09:45.019 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:45.019 #define SPDK_CONFIG_PGO_DIR 00:09:45.019 #undef SPDK_CONFIG_PGO_USE 00:09:45.019 #define SPDK_CONFIG_PREFIX /usr/local 00:09:45.019 #undef SPDK_CONFIG_RAID5F 00:09:45.019 #undef SPDK_CONFIG_RBD 00:09:45.019 #define SPDK_CONFIG_RDMA 1 00:09:45.019 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:45.019 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:45.019 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:45.019 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:45.019 #define SPDK_CONFIG_SHARED 1 00:09:45.019 #undef SPDK_CONFIG_SMA 00:09:45.019 #define SPDK_CONFIG_TESTS 1 00:09:45.019 #undef SPDK_CONFIG_TSAN 
00:09:45.019 #define SPDK_CONFIG_UBLK 1 00:09:45.019 #define SPDK_CONFIG_UBSAN 1 00:09:45.019 #undef SPDK_CONFIG_UNIT_TESTS 00:09:45.019 #undef SPDK_CONFIG_URING 00:09:45.019 #define SPDK_CONFIG_URING_PATH 00:09:45.019 #undef SPDK_CONFIG_URING_ZNS 00:09:45.019 #undef SPDK_CONFIG_USDT 00:09:45.019 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:45.019 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:45.019 #define SPDK_CONFIG_VFIO_USER 1 00:09:45.019 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:45.019 #define SPDK_CONFIG_VHOST 1 00:09:45.019 #define SPDK_CONFIG_VIRTIO 1 00:09:45.019 #undef SPDK_CONFIG_VTUNE 00:09:45.019 #define SPDK_CONFIG_VTUNE_DIR 00:09:45.019 #define SPDK_CONFIG_WERROR 1 00:09:45.019 #define SPDK_CONFIG_WPDK_DIR 00:09:45.019 #undef SPDK_CONFIG_XNVME 00:09:45.019 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
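The big block just above is applications.sh reading the generated include/spdk/config.h and glob-matching it for SPDK_CONFIG_DEBUG; with the xtrace escaping removed, the check is simply the following (rootdir stands for this workspace's SPDK checkout):

  config_h=$rootdir/include/spdk/config.h
  if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "this is a debug build"          # debug-only application wrappers may then be enabled
  fi
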
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.019 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:45.020 03:57:39 
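Incidentally, the very long PATH values seen throughout this trace (including at the top of this excerpt) are a side effect of /etc/opt/spdk-pkgdep/paths/export.sh being re-sourced by every nested test script; each pass prepends the same toolchain directories again, roughly in this shape (pattern only, simplified):

  PATH=/opt/golangci/1.54.2/bin:$PATH
  PATH=/opt/go/1.21.1/bin:$PATH
  PATH=/opt/protoc/21.7/bin:$PATH
  export PATH
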
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:45.020 03:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:45.020 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
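The run of ": 0" / ": 1" entries followed by "export SPDK_TEST_*" earlier in this trace (autotest_common.sh@58 onward) is the xtrace of a default-then-export pattern: each flag is given a default only when the caller has not already set it, then exported for child test scripts. The source file itself is not reproduced in this log, so the sketch below is an illustration of the idiom, with flag names taken from the trace:

  # Give each test flag a default unless it is already set, then export it
  # so child test scripts inherit the value.  Bash xtrace prints the
  # ": ${VAR:=default}" step after expansion, which is why the log shows
  # bare ": 0" or ": 1" lines.
  : "${RUN_NIGHTLY:=0}";               export RUN_NIGHTLY
  : "${SPDK_RUN_FUNCTIONAL_TEST:=0}";  export SPDK_RUN_FUNCTIONAL_TEST
  : "${SPDK_TEST_NVMF:=0}";            export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVME_CLI:=0}";        export SPDK_TEST_NVME_CLI
  : "${SPDK_TEST_VFIOUSER:=0}";        export SPDK_TEST_VFIOUSER

Flags that arrive already set to 1 (for example SPDK_TEST_NVMF, SPDK_TEST_NVME_CLI and SPDK_TEST_VFIOUSER above) pass through the defaulting step unchanged, which is why those entries trace as ": 1".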
00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:45.021 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2328693 ]] 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2328693 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
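Immediately above, the harness takes the PID recorded for this test run (2328693 here) and probes it with "kill -0" before provisioning per-test storage; kill -0 delivers no signal and only reports whether the process can be signalled. A minimal standalone version of that liveness check (script name and argument handling are illustrative):

  #!/usr/bin/env bash
  # Exit early if the supervising process has gone away.
  # kill -0 succeeds iff the PID exists and we are allowed to signal it.
  pid=${1:?usage: check_alive.sh PID}
  if kill -0 "$pid" 2>/dev/null; then
      echo "process $pid is alive"
  else
      echo "process $pid is gone" >&2
      exit 1
  fi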
00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.m1yW1W 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.m1yW1W/tests/target /tmp/spdk.m1yW1W 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=60021325824 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67273338880 00:09:45.022 03:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7252013056 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=33626636288 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=33636667392 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13432246272 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=13454667776 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22421504 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=33636249600 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=33636671488 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=421888 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6727319552 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6727331840 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:45.022 * Looking for test 
storage... 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=60021325824 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9466605568 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:45.022 03:57:39 
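The set_test_storage trace above decides where the roughly 2 GiB of per-test scratch space should live: it builds a candidate list (the test directory plus a mktemp fallback such as /tmp/spdk.m1yW1W), reads the df output into per-mount arrays, and settles on the first filesystem with enough free space, exporting it as SPDK_TEST_STORAGE. Below is a condensed sketch of that selection; the real script uses "df -T" and associative arrays per mount point, while this version keeps only the size check, and the candidate paths are placeholders:

  #!/usr/bin/env bash
  # Condensed sketch: pick the first candidate directory whose filesystem
  # has at least the requested free space (the trace asks for ~2 GiB).
  requested_size=$((2 * 1024 * 1024 * 1024))
  candidates=("$PWD" "$(mktemp -dt spdk.XXXXXX)")   # test dir, then a tmp fallback

  for dir in "${candidates[@]}"; do
      # df -P prints a stable two-line format; field 4 is available 1K blocks.
      avail_kb=$(df -P "$dir" | awk 'NR==2 {print $4}')
      if (( avail_kb * 1024 >= requested_size )); then
          echo "using $dir for test storage"
          exit 0
      fi
  done
  echo "no candidate directory has $requested_size bytes free" >&2
  exit 1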
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:45.022 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:45.023 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:45.023 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:45.023 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:45.023 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:45.023 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:45.023 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:45.023 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.281 --rc genhtml_branch_coverage=1 00:09:45.281 --rc genhtml_function_coverage=1 00:09:45.281 --rc genhtml_legend=1 00:09:45.281 --rc geninfo_all_blocks=1 00:09:45.281 --rc geninfo_unexecuted_blocks=1 00:09:45.281 00:09:45.281 ' 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.281 --rc genhtml_branch_coverage=1 00:09:45.281 --rc genhtml_function_coverage=1 00:09:45.281 --rc genhtml_legend=1 00:09:45.281 --rc geninfo_all_blocks=1 00:09:45.281 --rc geninfo_unexecuted_blocks=1 00:09:45.281 00:09:45.281 ' 00:09:45.281 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:45.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.281 --rc genhtml_branch_coverage=1 00:09:45.281 --rc genhtml_function_coverage=1 00:09:45.281 --rc genhtml_legend=1 00:09:45.282 --rc geninfo_all_blocks=1 00:09:45.282 --rc geninfo_unexecuted_blocks=1 00:09:45.282 00:09:45.282 ' 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.282 --rc genhtml_branch_coverage=1 00:09:45.282 --rc genhtml_function_coverage=1 00:09:45.282 --rc genhtml_legend=1 00:09:45.282 --rc geninfo_all_blocks=1 00:09:45.282 --rc geninfo_unexecuted_blocks=1 00:09:45.282 00:09:45.282 ' 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
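The scripts/common.sh fragment above ("lt 1.15 2" via cmp_versions and the decimal helper) is a component-wise version comparison used to decide which lcov/genhtml options to emit. A standalone sketch of the same idea, assuming plain dot-separated numeric components without leading zeros (the real cmp_versions also splits on '-' and ':' and validates each field through its decimal helper):

  #!/usr/bin/env bash
  # Return success if version $1 is strictly less than version $2,
  # comparing numeric components left to right (1.15 < 2, 1.9 < 1.15, ...).
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          local x=${a[i]:-0} y=${b[i]:-0}
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1   # equal versions are not "less than"
  }

  version_lt 1.15 2  && echo "1.15 < 2"
  version_lt 2.1 2.0 || echo "2.1 >= 2.0"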
-- nvmf/common.sh@7 -- # uname -s 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.282 03:57:39 
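The "test/nvmf/common.sh: line 33: [: : integer expression expected" line above is genuine command output interleaved with the trace, not part of the trace itself: build_nvmf_app_args evaluates '[' '' -eq 1 ']', handing test an empty string where -eq requires an integer, so the test exits with status 2 and the guarded branch is simply skipped. Which variable expanded to the empty string is not visible in the trace, so SOME_FLAG below is a stand-in; the second form shows the usual ${var:-0} guard for this situation:

  #!/usr/bin/env bash
  # Reproduce the message: test(1) needs integers on both sides of -eq.
  SOME_FLAG=""                              # stand-in for whatever flag was empty
  [ "$SOME_FLAG" -eq 1 ] && echo "enabled"  # prints "[: : integer expression expected"

  # Defensive form: treat unset/empty as 0 so the comparison is always numeric.
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo "enabled"
  fi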
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:45.282 03:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:47.817 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:47.817 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:47.817 03:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:47.817 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:47.817 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:47.817 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:47.818 03:57:41 
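The discovery loop above finds both E810 ports (vendor 0x8086, device 0x159b) and resolves each PCI address to its kernel interface by globbing /sys/bus/pci/devices/<bdf>/net/, which yields cvl_0_0 and cvl_0_1 on this host. A standalone sketch of that sysfs walk (interface names will differ on other machines):

  #!/usr/bin/env bash
  # List net interfaces backed by Intel E810 NICs (vendor 0x8086, device 0x159b),
  # using the same sysfs layout the trace above walks per PCI address.
  shopt -s nullglob
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor") device=$(<"$dev/device")
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      for netdir in "$dev"/net/*; do
          printf 'Found %s -> %s\n' "${dev##*/}" "${netdir##*/}"
      done
  done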
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:47.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:09:47.818 00:09:47.818 --- 10.0.0.2 ping statistics --- 00:09:47.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.818 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:47.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:09:47.818 00:09:47.818 --- 10.0.0.1 ping statistics --- 00:09:47.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.818 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:47.818 ************************************ 00:09:47.818 START TEST nvmf_filesystem_no_in_capsule 00:09:47.818 ************************************ 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2330341 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2330341 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2330341 ']' 00:09:47.818 
03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.818 03:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.818 [2024-12-10 03:57:41.905286] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:09:47.818 [2024-12-10 03:57:41.905365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.818 [2024-12-10 03:57:41.979783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.818 [2024-12-10 03:57:42.038948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.818 [2024-12-10 03:57:42.039008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.818 [2024-12-10 03:57:42.039037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.818 [2024-12-10 03:57:42.039048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.818 [2024-12-10 03:57:42.039058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
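The nvmf_tcp_init trace above boils down to a small amount of iproute2/iptables work: the target-side port is moved into a private network namespace, both ends get a /24 address, the NVMe/TCP port is opened, and reachability is verified in both directions before nvmf_tgt is started inside that namespace. A condensed sketch, using the interface names and addresses from this run (the real helper lives in nvmf/common.sh):

  ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address in the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                  # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> host

With the namespace in place, nvmfappstart launches nvmf_tgt via 'ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF' (pid 2330341 in this run) and waitforlisten blocks until the process answers on the /var/tmp/spdk.sock RPC socket.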
00:09:47.818 [2024-12-10 03:57:42.040574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.818 [2024-12-10 03:57:42.040603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.818 [2024-12-10 03:57:42.040670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.818 [2024-12-10 03:57:42.040674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.818 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.818 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:47.818 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:47.818 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:47.818 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.818 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.818 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:47.818 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:47.818 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.818 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.818 [2024-12-10 03:57:42.188212] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.818 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.818 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:47.818 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.818 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:48.077 Malloc1 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.077 03:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:48.077 [2024-12-10 03:57:42.393082] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:48.077 { 00:09:48.077 "name": "Malloc1", 00:09:48.077 "aliases": [ 00:09:48.077 "de35fc0a-6f66-4f35-8b9b-575bb82c02df" 00:09:48.077 ], 00:09:48.077 "product_name": "Malloc disk", 00:09:48.077 "block_size": 512, 00:09:48.077 "num_blocks": 1048576, 00:09:48.077 "uuid": "de35fc0a-6f66-4f35-8b9b-575bb82c02df", 00:09:48.077 "assigned_rate_limits": { 00:09:48.077 "rw_ios_per_sec": 0, 00:09:48.077 "rw_mbytes_per_sec": 0, 00:09:48.077 "r_mbytes_per_sec": 0, 00:09:48.077 "w_mbytes_per_sec": 0 00:09:48.077 }, 00:09:48.077 "claimed": true, 00:09:48.077 "claim_type": "exclusive_write", 00:09:48.077 "zoned": false, 00:09:48.077 "supported_io_types": { 00:09:48.077 "read": 
true, 00:09:48.077 "write": true, 00:09:48.077 "unmap": true, 00:09:48.077 "flush": true, 00:09:48.077 "reset": true, 00:09:48.077 "nvme_admin": false, 00:09:48.077 "nvme_io": false, 00:09:48.077 "nvme_io_md": false, 00:09:48.077 "write_zeroes": true, 00:09:48.077 "zcopy": true, 00:09:48.077 "get_zone_info": false, 00:09:48.077 "zone_management": false, 00:09:48.077 "zone_append": false, 00:09:48.077 "compare": false, 00:09:48.077 "compare_and_write": false, 00:09:48.077 "abort": true, 00:09:48.077 "seek_hole": false, 00:09:48.077 "seek_data": false, 00:09:48.077 "copy": true, 00:09:48.077 "nvme_iov_md": false 00:09:48.077 }, 00:09:48.077 "memory_domains": [ 00:09:48.077 { 00:09:48.077 "dma_device_id": "system", 00:09:48.077 "dma_device_type": 1 00:09:48.077 }, 00:09:48.077 { 00:09:48.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.077 "dma_device_type": 2 00:09:48.077 } 00:09:48.077 ], 00:09:48.077 "driver_specific": {} 00:09:48.077 } 00:09:48.077 ]' 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:48.077 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:48.334 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:48.334 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:48.334 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:48.334 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:48.334 03:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:48.899 03:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:48.899 03:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:48.899 03:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:48.899 03:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:48.899 03:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:50.795 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:50.795 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:50.795 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:09:50.795 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:50.795 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:50.795 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:50.795 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:50.795 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:50.795 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:50.795 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:50.795 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:50.795 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:50.795 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:50.795 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:50.795 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:50.795 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:50.795 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:51.052 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:51.309 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:52.680 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:52.680 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:52.680 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:52.680 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.680 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.680 ************************************ 00:09:52.680 START TEST filesystem_ext4 00:09:52.680 ************************************ 00:09:52.680 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
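Before filesystem_ext4 starts, the target has been provisioned and the initiator attached entirely through the RPC calls traced above. Collected in order (rpc_cmd is the test framework's RPC wrapper; the --hostnqn/--hostid arguments from the log are omitted here for brevity):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0       # -c 0: in-capsule data disabled for this pass
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1               # 512 MiB ramdisk bdev with 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # host-side attach
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%             # one partition shared by all fs tests
  partprobe

waitforserial then polls lsblk for the SPDKISFASTANDAWESOME serial, and the test confirms the 536870912-byte block device matches the malloc bdev size before handing /dev/nvme0n1p1 to the filesystem sub-tests.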
00:09:52.680 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:52.680 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:52.680 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:52.680 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:52.680 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:52.680 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:52.680 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:52.680 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:52.680 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:52.680 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:52.680 mke2fs 1.47.0 (5-Feb-2023) 00:09:52.680 Discarding device blocks: 0/522240 done 00:09:52.680 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:52.680 Filesystem UUID: ee4233b6-6dcf-4f06-8795-e6e287e14e89 00:09:52.680 Superblock backups stored on blocks: 00:09:52.680 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:52.680 00:09:52.680 Allocating group tables: 0/64 done 00:09:52.680 Writing inode tables: 0/64 done 00:09:54.053 Creating journal (8192 blocks): done 00:09:56.248 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:09:56.248 00:09:56.248 03:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:56.248 03:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:02.800 
03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2330341 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:02.800 00:10:02.800 real 0m9.898s 00:10:02.800 user 0m0.017s 00:10:02.800 sys 0m0.071s 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:02.800 ************************************ 00:10:02.800 END TEST filesystem_ext4 00:10:02.800 ************************************ 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:02.800 ************************************ 00:10:02.800 START TEST filesystem_btrfs 00:10:02.800 ************************************ 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:02.800 03:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:02.800 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:02.800 btrfs-progs v6.8.1 00:10:02.800 See https://btrfs.readthedocs.io for more information. 00:10:02.800 00:10:02.800 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:02.800 NOTE: several default settings have changed in version 5.15, please make sure 00:10:02.800 this does not affect your deployments: 00:10:02.800 - DUP for metadata (-m dup) 00:10:02.800 - enabled no-holes (-O no-holes) 00:10:02.800 - enabled free-space-tree (-R free-space-tree) 00:10:02.800 00:10:02.800 Label: (null) 00:10:02.800 UUID: 9bbb63b8-dd63-491b-9460-7172b53a1433 00:10:02.800 Node size: 16384 00:10:02.800 Sector size: 4096 (CPU page size: 4096) 00:10:02.800 Filesystem size: 510.00MiB 00:10:02.800 Block group profiles: 00:10:02.800 Data: single 8.00MiB 00:10:02.800 Metadata: DUP 32.00MiB 00:10:02.800 System: DUP 8.00MiB 00:10:02.800 SSD detected: yes 00:10:02.800 Zoned device: no 00:10:02.800 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:02.800 Checksum: crc32c 00:10:02.800 Number of devices: 1 00:10:02.800 Devices: 00:10:02.800 ID SIZE PATH 00:10:02.800 1 510.00MiB /dev/nvme0n1p1 00:10:02.800 00:10:02.800 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:02.800 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2330341 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:03.058 
03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:03.058 00:10:03.058 real 0m0.725s 00:10:03.058 user 0m0.023s 00:10:03.058 sys 0m0.099s 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:03.058 ************************************ 00:10:03.058 END TEST filesystem_btrfs 00:10:03.058 ************************************ 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:03.058 ************************************ 00:10:03.058 START TEST filesystem_xfs 00:10:03.058 ************************************ 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:03.058 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:03.316 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:03.316 = sectsz=512 attr=2, projid32bit=1 00:10:03.316 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:03.316 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:03.316 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:03.316 = sunit=0 swidth=0 blks 00:10:03.316 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:03.316 log =internal log bsize=4096 blocks=16384, version=2 00:10:03.316 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:03.316 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:04.248 Discarding blocks...Done. 00:10:04.248 03:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:04.248 03:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:06.825 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:06.825 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:06.825 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:06.825 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:06.825 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:06.825 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:06.825 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2330341 00:10:06.825 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:06.825 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:06.825 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:06.825 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:06.825 00:10:06.825 real 0m3.405s 00:10:06.825 user 0m0.018s 00:10:06.825 sys 0m0.059s 00:10:06.825 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.825 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:06.825 ************************************ 00:10:06.825 END TEST filesystem_xfs 00:10:06.825 ************************************ 00:10:06.825 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:06.825 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:06.825 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:06.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.825 03:58:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:06.825 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:06.825 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:06.825 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.825 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:06.825 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:07.082 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:07.083 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:07.083 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.083 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.083 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.083 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:07.083 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2330341 00:10:07.083 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2330341 ']' 00:10:07.083 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2330341 00:10:07.083 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:07.083 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.083 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2330341 00:10:07.083 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.083 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.083 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2330341' 00:10:07.083 killing process with pid 2330341 00:10:07.083 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2330341 00:10:07.083 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2330341 00:10:07.343 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:07.343 00:10:07.343 real 0m19.847s 00:10:07.343 user 1m16.980s 00:10:07.343 sys 0m2.288s 00:10:07.343 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.343 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.343 ************************************ 00:10:07.343 END TEST nvmf_filesystem_no_in_capsule 00:10:07.343 ************************************ 00:10:07.343 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:07.343 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.343 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.343 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:07.602 ************************************ 00:10:07.602 START TEST nvmf_filesystem_in_capsule 00:10:07.602 ************************************ 00:10:07.602 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:07.602 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:07.602 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:07.602 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:07.602 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:07.602 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.602 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2332963 00:10:07.602 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:07.602 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2332963 00:10:07.602 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2332963 ']' 00:10:07.602 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.602 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.602 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
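The ext4, btrfs and xfs sub-tests that just completed (and that repeat below with in_capsule=4096) all follow the same shape from target/filesystem.sh: format the shared partition, run a minimal write/delete cycle through the NVMe/TCP path, and verify that the target and the block devices survived. Paraphrased from the trace (force is -F for ext4 and -f for btrfs/xfs, per make_filesystem):

  mkfs.$fstype $force /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync                 # exercise the exported namespace
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                            # nvmf_tgt must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1         # controller still visible on the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1       # partition still visible on the host

The second pass, nvmf_filesystem_in_capsule, repeats the same sequence but creates the transport with '-c 4096', so up to 4096 bytes of I/O data can travel inside the command capsule on the TCP connection.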
00:10:07.602 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.602 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.602 [2024-12-10 03:58:01.807083] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:10:07.602 [2024-12-10 03:58:01.807168] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.602 [2024-12-10 03:58:01.877759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:07.602 [2024-12-10 03:58:01.935265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.602 [2024-12-10 03:58:01.935333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.602 [2024-12-10 03:58:01.935347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.602 [2024-12-10 03:58:01.935358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.602 [2024-12-10 03:58:01.935367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.602 [2024-12-10 03:58:01.936913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.602 [2024-12-10 03:58:01.936971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.602 [2024-12-10 03:58:01.937040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.602 [2024-12-10 03:58:01.937043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.860 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.860 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:07.860 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:07.860 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:07.860 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.861 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.861 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:07.861 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:07.861 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.861 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.861 [2024-12-10 03:58:02.086582] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.861 03:58:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.861 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:07.861 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.861 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.119 Malloc1 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.119 [2024-12-10 03:58:02.298497] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:08.119 03:58:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:08.119 { 00:10:08.119 "name": "Malloc1", 00:10:08.119 "aliases": [ 00:10:08.119 "2deb82c6-505c-41e7-bb20-8a217c07dcab" 00:10:08.119 ], 00:10:08.119 "product_name": "Malloc disk", 00:10:08.119 "block_size": 512, 00:10:08.119 "num_blocks": 1048576, 00:10:08.119 "uuid": "2deb82c6-505c-41e7-bb20-8a217c07dcab", 00:10:08.119 "assigned_rate_limits": { 00:10:08.119 "rw_ios_per_sec": 0, 00:10:08.119 "rw_mbytes_per_sec": 0, 00:10:08.119 "r_mbytes_per_sec": 0, 00:10:08.119 "w_mbytes_per_sec": 0 00:10:08.119 }, 00:10:08.119 "claimed": true, 00:10:08.119 "claim_type": "exclusive_write", 00:10:08.119 "zoned": false, 00:10:08.119 "supported_io_types": { 00:10:08.119 "read": true, 00:10:08.119 "write": true, 00:10:08.119 "unmap": true, 00:10:08.119 "flush": true, 00:10:08.119 "reset": true, 00:10:08.119 "nvme_admin": false, 00:10:08.119 "nvme_io": false, 00:10:08.119 "nvme_io_md": false, 00:10:08.119 "write_zeroes": true, 00:10:08.119 "zcopy": true, 00:10:08.119 "get_zone_info": false, 00:10:08.119 "zone_management": false, 00:10:08.119 "zone_append": false, 00:10:08.119 "compare": false, 00:10:08.119 "compare_and_write": false, 00:10:08.119 "abort": true, 00:10:08.119 "seek_hole": false, 00:10:08.119 "seek_data": false, 00:10:08.119 "copy": true, 00:10:08.119 "nvme_iov_md": false 00:10:08.119 }, 00:10:08.119 "memory_domains": [ 00:10:08.119 { 00:10:08.119 "dma_device_id": "system", 00:10:08.119 "dma_device_type": 1 00:10:08.119 }, 00:10:08.119 { 00:10:08.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.119 "dma_device_type": 2 00:10:08.119 } 00:10:08.119 ], 00:10:08.119 "driver_specific": {} 00:10:08.119 } 00:10:08.119 ]' 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:08.119 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:09.052 03:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:09.052 03:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:09.052 03:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:09.052 03:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:09.052 03:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:10.947 03:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:10.947 03:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:10.947 03:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:10.947 03:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:10.947 03:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:10.947 03:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:10.947 03:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:10.947 03:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:10.947 03:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:10.947 03:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:10.947 03:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:10.947 03:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:10.947 03:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:10.947 03:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:10.947 03:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:10.947 03:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:10.947 03:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:11.204 03:58:05 
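[editor's note] For reference, the target provisioning and host attach performed in the trace above can be reproduced outside the harness roughly as follows. rpc_cmd in the log is the test suite's wrapper around SPDK's scripts/rpc.py (the ./scripts/rpc.py path below is an assumption), the TCP transport itself is created earlier by the harness and is taken as a given here, and the until loop is a paraphrase of the suite's waitforserial helper; the RPC names, nvme connect arguments, and grep pattern are copied from the trace.

  # Target side: 512 MiB malloc bdev (512-byte blocks) exported through an
  # NVMe/TCP subsystem listening on 10.0.0.2:4420.
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side: connect, wait for the namespace to appear by serial number,
  # then lay down a single GPT partition covering the whole namespace.
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
               --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  mkdir -p /mnt/device
  parted -s "/dev/${nvme_name}" mklabel gpt mkpart SPDK_TEST 0% 100%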
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:12.136 03:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:13.070 03:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:13.070 03:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:13.070 03:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:13.070 03:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.070 03:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.070 ************************************ 00:10:13.070 START TEST filesystem_in_capsule_ext4 00:10:13.070 ************************************ 00:10:13.070 03:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:13.070 03:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:13.070 03:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:13.070 03:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:13.070 03:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:13.070 03:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:13.070 03:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:13.070 03:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:13.070 03:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:13.070 03:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:13.070 03:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:13.070 mke2fs 1.47.0 (5-Feb-2023) 00:10:13.070 Discarding device blocks: 0/522240 done 00:10:13.070 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:13.070 Filesystem UUID: b191af0f-5de9-417b-8d62-6688ba861609 00:10:13.070 Superblock backups stored on blocks: 00:10:13.070 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:13.070 00:10:13.070 Allocating group tables: 0/64 done 00:10:13.070 Writing inode tables: 
0/64 done 00:10:14.969 Creating journal (8192 blocks): done 00:10:16.357 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:10:16.357 00:10:16.357 03:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:16.357 03:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2332963 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:21.625 00:10:21.625 real 0m8.678s 00:10:21.625 user 0m0.020s 00:10:21.625 sys 0m0.061s 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:21.625 ************************************ 00:10:21.625 END TEST filesystem_in_capsule_ext4 00:10:21.625 ************************************ 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.625 
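[editor's note] The ext4 pass above and the btrfs/xfs passes that follow all drive the same nvmf_filesystem_create flow from target/filesystem.sh; stripped of the xtrace prefixes it amounts to roughly the sketch below, where nvmfpid holds the target's PID (2332963 in this run) and only the mkfs invocation changes per filesystem.

  # Build the filesystem on the NVMe/TCP-backed partition, create and delete a
  # file through it, then confirm the target and the block devices survived.
  mkfs.ext4 -F /dev/nvme0n1p1          # btrfs/xfs runs use mkfs.<fs> -f instead
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                        # target process still alive
  lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible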
************************************ 00:10:21.625 START TEST filesystem_in_capsule_btrfs 00:10:21.625 ************************************ 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:21.625 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:21.626 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:21.887 btrfs-progs v6.8.1 00:10:21.887 See https://btrfs.readthedocs.io for more information. 00:10:21.887 00:10:21.887 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:21.887 NOTE: several default settings have changed in version 5.15, please make sure 00:10:21.887 this does not affect your deployments: 00:10:21.887 - DUP for metadata (-m dup) 00:10:21.887 - enabled no-holes (-O no-holes) 00:10:21.887 - enabled free-space-tree (-R free-space-tree) 00:10:21.887 00:10:21.887 Label: (null) 00:10:21.887 UUID: 184687a1-d13b-4d48-9b79-bd01a4a48ad6 00:10:21.887 Node size: 16384 00:10:21.887 Sector size: 4096 (CPU page size: 4096) 00:10:21.887 Filesystem size: 510.00MiB 00:10:21.887 Block group profiles: 00:10:21.887 Data: single 8.00MiB 00:10:21.887 Metadata: DUP 32.00MiB 00:10:21.887 System: DUP 8.00MiB 00:10:21.887 SSD detected: yes 00:10:21.887 Zoned device: no 00:10:21.887 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:21.887 Checksum: crc32c 00:10:21.887 Number of devices: 1 00:10:21.887 Devices: 00:10:21.887 ID SIZE PATH 00:10:21.887 1 510.00MiB /dev/nvme0n1p1 00:10:21.887 00:10:21.887 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:21.887 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2332963 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:22.146 00:10:22.146 real 0m0.543s 00:10:22.146 user 0m0.022s 00:10:22.146 sys 0m0.093s 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:10:22.146 ************************************ 00:10:22.146 END TEST filesystem_in_capsule_btrfs 00:10:22.146 ************************************ 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.146 ************************************ 00:10:22.146 START TEST filesystem_in_capsule_xfs 00:10:22.146 ************************************ 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:22.146 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:22.405 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:22.405 = sectsz=512 attr=2, projid32bit=1 00:10:22.405 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:22.405 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:22.405 data = bsize=4096 blocks=130560, imaxpct=25 00:10:22.405 = sunit=0 swidth=0 blks 00:10:22.405 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:22.405 log =internal log bsize=4096 blocks=16384, version=2 00:10:22.405 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:22.405 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:23.341 Discarding blocks...Done. 
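[editor's note] Across the three passes the only moving part is the make_filesystem call. Reconstructed from the locals visible in the trace (fstype, dev_name, i, force), the helper in common/autotest_common.sh looks approximately like the sketch below; this is a paraphrase, not the exact helper, and the real function carries retry bookkeeping around the counter i that the trace does not show.

  # Approximate shape of make_filesystem: ext4 takes -F to force creation,
  # the other filesystems (btrfs, xfs) take -f, then mkfs.<fstype> runs.
  make_filesystem() {
      local fstype=$1
      local dev_name=$2
      local i=0
      local force
      if [[ $fstype == ext4 ]]; then
          force=-F
      else
          force=-f
      fi
      mkfs.${fstype} ${force} "${dev_name}"
  }

  make_filesystem btrfs /dev/nvme0n1p1   # yields the btrfs-progs report above
  make_filesystem xfs   /dev/nvme0n1p1   # yields the mkfs.xfs report above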
00:10:23.341 03:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:23.341 03:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:25.246 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:25.246 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:25.246 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:25.246 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:25.246 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:25.246 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:25.246 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2332963 00:10:25.246 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:25.246 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:25.246 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:25.246 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:25.246 00:10:25.246 real 0m2.998s 00:10:25.246 user 0m0.015s 00:10:25.246 sys 0m0.063s 00:10:25.246 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.246 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:25.246 ************************************ 00:10:25.246 END TEST filesystem_in_capsule_xfs 00:10:25.246 ************************************ 00:10:25.246 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:25.506 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:25.506 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:25.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.766 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:25.766 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:10:25.766 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:25.766 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:25.766 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:25.766 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:25.766 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:25.766 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:25.766 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.766 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.766 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.766 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:25.766 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2332963 00:10:25.766 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2332963 ']' 00:10:25.767 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2332963 00:10:25.767 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:25.767 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.767 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2332963 00:10:25.767 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.767 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.767 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2332963' 00:10:25.767 killing process with pid 2332963 00:10:25.767 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2332963 00:10:25.767 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2332963 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:26.336 00:10:26.336 real 0m18.677s 00:10:26.336 user 1m12.388s 00:10:26.336 sys 0m2.230s 00:10:26.336 03:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.336 ************************************ 00:10:26.336 END TEST nvmf_filesystem_in_capsule 00:10:26.336 ************************************ 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:26.336 rmmod nvme_tcp 00:10:26.336 rmmod nvme_fabrics 00:10:26.336 rmmod nvme_keyring 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.336 03:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.246 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:28.246 00:10:28.246 real 0m43.398s 00:10:28.246 user 2m30.486s 00:10:28.246 sys 0m6.291s 00:10:28.246 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.246 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:28.246 
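[editor's note] The teardown that closes the in-capsule suite, condensed from the trace: drop the test partition, detach the host, delete the subsystem, stop the target, and unload the initiator modules. As before, ./scripts/rpc.py stands in for the suite's rpc_cmd wrapper, and nvmfpid (2332963 here) is the target process ID.

  # Host side: remove the GPT partition and disconnect from the subsystem.
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

  # Target side: delete the subsystem and stop the nvmf target process.
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid"
  wait "$nvmfpid"

  # Module cleanup mirrored by nvmftestfini on the initiator.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics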
************************************ 00:10:28.246 END TEST nvmf_filesystem 00:10:28.246 ************************************ 00:10:28.246 03:58:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:28.246 03:58:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:28.246 03:58:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.246 03:58:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:28.246 ************************************ 00:10:28.246 START TEST nvmf_target_discovery 00:10:28.246 ************************************ 00:10:28.246 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:28.505 * Looking for test storage... 00:10:28.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:28.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.505 --rc genhtml_branch_coverage=1 00:10:28.505 --rc genhtml_function_coverage=1 00:10:28.505 --rc genhtml_legend=1 00:10:28.505 --rc geninfo_all_blocks=1 00:10:28.505 --rc geninfo_unexecuted_blocks=1 00:10:28.505 00:10:28.505 ' 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:28.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.505 --rc genhtml_branch_coverage=1 00:10:28.505 --rc genhtml_function_coverage=1 00:10:28.505 --rc genhtml_legend=1 00:10:28.505 --rc geninfo_all_blocks=1 00:10:28.505 --rc geninfo_unexecuted_blocks=1 00:10:28.505 00:10:28.505 ' 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:28.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.505 --rc genhtml_branch_coverage=1 00:10:28.505 --rc genhtml_function_coverage=1 00:10:28.505 --rc genhtml_legend=1 00:10:28.505 --rc geninfo_all_blocks=1 00:10:28.505 --rc geninfo_unexecuted_blocks=1 00:10:28.505 00:10:28.505 ' 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:28.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.505 --rc genhtml_branch_coverage=1 00:10:28.505 --rc genhtml_function_coverage=1 00:10:28.505 --rc genhtml_legend=1 00:10:28.505 --rc geninfo_all_blocks=1 00:10:28.505 --rc geninfo_unexecuted_blocks=1 00:10:28.505 00:10:28.505 ' 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.505 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:28.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:28.506 03:58:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:31.041 03:58:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:31.041 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:31.041 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:31.041 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:31.041 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:31.042 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:31.042 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:31.042 03:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:31.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:31.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:10:31.042 00:10:31.042 --- 10.0.0.2 ping statistics --- 00:10:31.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.042 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:31.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:31.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:10:31.042 00:10:31.042 --- 10.0.0.1 ping statistics --- 00:10:31.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.042 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2337266 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2337266 00:10:31.042 03:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2337266 ']' 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.042 [2024-12-10 03:58:25.140743] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:10:31.042 [2024-12-10 03:58:25.140834] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.042 [2024-12-10 03:58:25.212097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.042 [2024-12-10 03:58:25.266408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.042 [2024-12-10 03:58:25.266465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:31.042 [2024-12-10 03:58:25.266493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.042 [2024-12-10 03:58:25.266505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.042 [2024-12-10 03:58:25.266514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
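For reference, the bring-up that the trace above just performed can be condensed into the sketch below; the interface names (cvl_0_0, cvl_0_1), addresses, namespace name and nvmf_tgt arguments are the values used by this particular run, not fixed requirements:
  ip netns add cvl_0_0_ns_spdk                                   # namespace that will host the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target-side E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the trace also tags this rule with an SPDK_NVMF comment
  ping -c 1 10.0.0.2                                             # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator reachability check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target runs inside the namespace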
00:10:31.042 [2024-12-10 03:58:25.268109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.042 [2024-12-10 03:58:25.268167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.042 [2024-12-10 03:58:25.268282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.042 [2024-12-10 03:58:25.268287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.042 [2024-12-10 03:58:25.413977] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.042 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 Null1 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 03:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 [2024-12-10 03:58:25.470741] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 Null2 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:31.302 Null3 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 Null4 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:10:31.562 00:10:31.562 Discovery Log Number of Records 6, Generation counter 6 00:10:31.562 =====Discovery Log Entry 0====== 00:10:31.562 trtype: tcp 00:10:31.562 adrfam: ipv4 00:10:31.562 subtype: current discovery subsystem 00:10:31.562 treq: not required 00:10:31.562 portid: 0 00:10:31.562 trsvcid: 4420 00:10:31.562 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:31.562 traddr: 10.0.0.2 00:10:31.562 eflags: explicit discovery connections, duplicate discovery information 00:10:31.562 sectype: none 00:10:31.562 =====Discovery Log Entry 1====== 00:10:31.562 trtype: tcp 00:10:31.562 adrfam: ipv4 00:10:31.562 subtype: nvme subsystem 00:10:31.562 treq: not required 00:10:31.562 portid: 0 00:10:31.562 trsvcid: 4420 00:10:31.562 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:31.562 traddr: 10.0.0.2 00:10:31.562 eflags: none 00:10:31.562 sectype: none 00:10:31.562 =====Discovery Log Entry 2====== 00:10:31.562 trtype: tcp 00:10:31.562 adrfam: ipv4 00:10:31.562 subtype: nvme subsystem 00:10:31.562 treq: not required 00:10:31.562 portid: 0 00:10:31.562 trsvcid: 4420 00:10:31.562 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:31.562 traddr: 10.0.0.2 00:10:31.562 eflags: none 00:10:31.562 sectype: none 00:10:31.562 =====Discovery Log Entry 3====== 00:10:31.562 trtype: tcp 00:10:31.562 adrfam: ipv4 00:10:31.562 subtype: nvme subsystem 00:10:31.562 treq: not required 00:10:31.562 portid: 0 00:10:31.562 trsvcid: 4420 00:10:31.562 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:31.562 traddr: 10.0.0.2 00:10:31.562 eflags: none 00:10:31.562 sectype: none 00:10:31.562 =====Discovery Log Entry 4====== 00:10:31.562 trtype: tcp 00:10:31.562 adrfam: ipv4 00:10:31.562 subtype: nvme subsystem 
00:10:31.562 treq: not required 00:10:31.562 portid: 0 00:10:31.562 trsvcid: 4420 00:10:31.562 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:31.562 traddr: 10.0.0.2 00:10:31.562 eflags: none 00:10:31.562 sectype: none 00:10:31.562 =====Discovery Log Entry 5====== 00:10:31.562 trtype: tcp 00:10:31.562 adrfam: ipv4 00:10:31.562 subtype: discovery subsystem referral 00:10:31.562 treq: not required 00:10:31.562 portid: 0 00:10:31.562 trsvcid: 4430 00:10:31.562 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:31.562 traddr: 10.0.0.2 00:10:31.562 eflags: none 00:10:31.562 sectype: none 00:10:31.562 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:31.562 Perform nvmf subsystem discovery via RPC 00:10:31.562 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:31.562 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.562 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.562 [ 00:10:31.562 { 00:10:31.562 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:31.562 "subtype": "Discovery", 00:10:31.562 "listen_addresses": [ 00:10:31.562 { 00:10:31.562 "trtype": "TCP", 00:10:31.562 "adrfam": "IPv4", 00:10:31.562 "traddr": "10.0.0.2", 00:10:31.562 "trsvcid": "4420" 00:10:31.562 } 00:10:31.562 ], 00:10:31.562 "allow_any_host": true, 00:10:31.562 "hosts": [] 00:10:31.562 }, 00:10:31.562 { 00:10:31.562 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:31.562 "subtype": "NVMe", 00:10:31.562 "listen_addresses": [ 00:10:31.562 { 00:10:31.562 "trtype": "TCP", 00:10:31.562 "adrfam": "IPv4", 00:10:31.562 "traddr": "10.0.0.2", 00:10:31.562 "trsvcid": "4420" 00:10:31.562 } 00:10:31.562 ], 00:10:31.562 "allow_any_host": true, 00:10:31.562 "hosts": [], 00:10:31.562 "serial_number": "SPDK00000000000001", 00:10:31.562 "model_number": "SPDK bdev Controller", 00:10:31.562 "max_namespaces": 32, 00:10:31.562 "min_cntlid": 1, 00:10:31.562 "max_cntlid": 65519, 00:10:31.562 "namespaces": [ 00:10:31.562 { 00:10:31.562 "nsid": 1, 00:10:31.562 "bdev_name": "Null1", 00:10:31.562 "name": "Null1", 00:10:31.562 "nguid": "766C15F0921442E98A0255DBD9026145", 00:10:31.562 "uuid": "766c15f0-9214-42e9-8a02-55dbd9026145" 00:10:31.562 } 00:10:31.562 ] 00:10:31.562 }, 00:10:31.562 { 00:10:31.562 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:31.562 "subtype": "NVMe", 00:10:31.562 "listen_addresses": [ 00:10:31.562 { 00:10:31.562 "trtype": "TCP", 00:10:31.562 "adrfam": "IPv4", 00:10:31.562 "traddr": "10.0.0.2", 00:10:31.562 "trsvcid": "4420" 00:10:31.562 } 00:10:31.562 ], 00:10:31.562 "allow_any_host": true, 00:10:31.562 "hosts": [], 00:10:31.562 "serial_number": "SPDK00000000000002", 00:10:31.562 "model_number": "SPDK bdev Controller", 00:10:31.562 "max_namespaces": 32, 00:10:31.562 "min_cntlid": 1, 00:10:31.562 "max_cntlid": 65519, 00:10:31.562 "namespaces": [ 00:10:31.562 { 00:10:31.562 "nsid": 1, 00:10:31.562 "bdev_name": "Null2", 00:10:31.562 "name": "Null2", 00:10:31.562 "nguid": "D3E0AAFCFB66440AB19400A24809CF01", 00:10:31.562 "uuid": "d3e0aafc-fb66-440a-b194-00a24809cf01" 00:10:31.562 } 00:10:31.562 ] 00:10:31.562 }, 00:10:31.562 { 00:10:31.562 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:31.562 "subtype": "NVMe", 00:10:31.562 "listen_addresses": [ 00:10:31.562 { 00:10:31.562 "trtype": "TCP", 00:10:31.562 "adrfam": "IPv4", 00:10:31.562 "traddr": "10.0.0.2", 
00:10:31.562 "trsvcid": "4420" 00:10:31.562 } 00:10:31.562 ], 00:10:31.562 "allow_any_host": true, 00:10:31.562 "hosts": [], 00:10:31.562 "serial_number": "SPDK00000000000003", 00:10:31.562 "model_number": "SPDK bdev Controller", 00:10:31.562 "max_namespaces": 32, 00:10:31.562 "min_cntlid": 1, 00:10:31.562 "max_cntlid": 65519, 00:10:31.562 "namespaces": [ 00:10:31.562 { 00:10:31.562 "nsid": 1, 00:10:31.562 "bdev_name": "Null3", 00:10:31.562 "name": "Null3", 00:10:31.562 "nguid": "C34AAF0A0CAC43B7A918BC2CA419AA1C", 00:10:31.562 "uuid": "c34aaf0a-0cac-43b7-a918-bc2ca419aa1c" 00:10:31.562 } 00:10:31.562 ] 00:10:31.562 }, 00:10:31.562 { 00:10:31.562 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:31.562 "subtype": "NVMe", 00:10:31.562 "listen_addresses": [ 00:10:31.562 { 00:10:31.562 "trtype": "TCP", 00:10:31.562 "adrfam": "IPv4", 00:10:31.562 "traddr": "10.0.0.2", 00:10:31.562 "trsvcid": "4420" 00:10:31.562 } 00:10:31.562 ], 00:10:31.562 "allow_any_host": true, 00:10:31.562 "hosts": [], 00:10:31.562 "serial_number": "SPDK00000000000004", 00:10:31.562 "model_number": "SPDK bdev Controller", 00:10:31.562 "max_namespaces": 32, 00:10:31.562 "min_cntlid": 1, 00:10:31.562 "max_cntlid": 65519, 00:10:31.562 "namespaces": [ 00:10:31.562 { 00:10:31.562 "nsid": 1, 00:10:31.562 "bdev_name": "Null4", 00:10:31.562 "name": "Null4", 00:10:31.562 "nguid": "0A2CF8A9052D42F99B12515BD2F9B8E6", 00:10:31.562 "uuid": "0a2cf8a9-052d-42f9-9b12-515bd2f9b8e6" 00:10:31.562 } 00:10:31.562 ] 00:10:31.562 } 00:10:31.562 ] 00:10:31.562 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.562 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:31.562 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:31.562 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.562 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.562 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.562 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.562 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:31.562 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.562 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.562 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.562 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:31.562 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:31.562 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.563 03:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.563 03:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.563 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:31.563 rmmod nvme_tcp 00:10:31.822 rmmod nvme_fabrics 00:10:31.822 rmmod nvme_keyring 00:10:31.822 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:31.822 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:31.822 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:31.822 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2337266 ']' 00:10:31.822 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2337266 00:10:31.822 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2337266 ']' 00:10:31.822 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2337266 00:10:31.822 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:31.822 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.822 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2337266 00:10:31.822 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.822 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.822 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2337266' 00:10:31.822 killing process with pid 2337266 00:10:31.822 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2337266 00:10:31.822 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2337266 00:10:32.082 03:58:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:32.082 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:32.082 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:32.082 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:32.082 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:32.082 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:32.082 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:32.082 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:32.082 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:32.082 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.082 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.082 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.990 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:33.990 00:10:33.990 real 0m5.704s 00:10:33.990 user 0m4.886s 00:10:33.990 sys 0m1.972s 00:10:33.990 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.990 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.990 ************************************ 00:10:33.990 END TEST nvmf_target_discovery 00:10:33.990 ************************************ 00:10:33.990 03:58:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:33.990 03:58:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:33.990 03:58:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.990 03:58:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:33.990 ************************************ 00:10:33.990 START TEST nvmf_referrals 00:10:33.990 ************************************ 00:10:33.990 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:34.249 * Looking for test storage... 
00:10:34.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:34.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.249 --rc genhtml_branch_coverage=1 00:10:34.249 --rc genhtml_function_coverage=1 00:10:34.249 --rc genhtml_legend=1 00:10:34.249 --rc geninfo_all_blocks=1 00:10:34.249 --rc geninfo_unexecuted_blocks=1 00:10:34.249 00:10:34.249 ' 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:34.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.249 --rc genhtml_branch_coverage=1 00:10:34.249 --rc genhtml_function_coverage=1 00:10:34.249 --rc genhtml_legend=1 00:10:34.249 --rc geninfo_all_blocks=1 00:10:34.249 --rc geninfo_unexecuted_blocks=1 00:10:34.249 00:10:34.249 ' 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:34.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.249 --rc genhtml_branch_coverage=1 00:10:34.249 --rc genhtml_function_coverage=1 00:10:34.249 --rc genhtml_legend=1 00:10:34.249 --rc geninfo_all_blocks=1 00:10:34.249 --rc geninfo_unexecuted_blocks=1 00:10:34.249 00:10:34.249 ' 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:34.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.249 --rc genhtml_branch_coverage=1 00:10:34.249 --rc genhtml_function_coverage=1 00:10:34.249 --rc genhtml_legend=1 00:10:34.249 --rc geninfo_all_blocks=1 00:10:34.249 --rc geninfo_unexecuted_blocks=1 00:10:34.249 00:10:34.249 ' 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.249 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
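The entries around this point define the referral endpoints referrals.sh will exercise (NVMF_REFERRAL_IP_1..3 = 127.0.0.2 through 127.0.0.4, NVMF_PORT_REFERRAL=4430). As a hedged sketch, referral entries are managed through the target's RPC socket; rpc_cmd in these traces wraps scripts/rpc.py, so the equivalent stand-alone calls would look roughly as follows (the socket path and the availability of the get method are assumptions about this build, not shown in the trace):
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_discovery_add_referral    -t tcp -a 127.0.0.2 -s 4430   # add a referral to another discovery service
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_discovery_get_referrals                                 # list configured referrals as JSON (assumed available)
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430   # remove it again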
00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:34.250 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:36.783 03:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:36.783 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:36.783 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:36.783 
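Here nvmf/common.sh is rediscovering the two E810 functions (PCI IDs 0x8086:0x159b) and mapping each one to its kernel net device through sysfs, exactly as the discovery test did earlier. A rough stand-alone equivalent of that lookup (not the script's own pci_bus_cache mechanism, and assuming lspci's usual -Dnn output format) is:
  for pci in $(lspci -Dnn | awk '/8086:159b/ {print $1}'); do   # E810 vendor:device IDs taken from the trace
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do          # same sysfs path the script globs
          [ -e "$dev" ] && echo "$pci -> $(basename "$dev")"    # e.g. 0000:0a:00.0 -> cvl_0_0 on this host
      done
  done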
03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.783 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:36.784 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:36.784 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:36.784 03:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:36.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:36.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:10:36.784 00:10:36.784 --- 10.0.0.2 ping statistics --- 00:10:36.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.784 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:36.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:10:36.784 00:10:36.784 --- 10.0.0.1 ping statistics --- 00:10:36.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.784 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2339359 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2339359 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2339359 ']' 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
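The namespace bring-up traced above boils down to roughly the following sequence. This is a readability reconstruction of the traced commands, not the harness script itself; the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace and the 10.0.0.x addresses are simply what this host happened to use, and the iptables comment is shortened here to its SPDK_NVMF tag.
# Move the target-side port into its own network namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port (the rule is tagged SPDK_NVMF so teardown can strip it) and check reachability
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
Once both pings succeed, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the nvmfpid=2339359 process the test waits for next.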
00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.784 03:58:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:36.784 [2024-12-10 03:58:30.923956] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:10:36.784 [2024-12-10 03:58:30.924046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.784 [2024-12-10 03:58:30.996641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.784 [2024-12-10 03:58:31.058557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.784 [2024-12-10 03:58:31.058608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.784 [2024-12-10 03:58:31.058623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.784 [2024-12-10 03:58:31.058635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.784 [2024-12-10 03:58:31.058646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.784 [2024-12-10 03:58:31.060233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.784 [2024-12-10 03:58:31.060303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.784 [2024-12-10 03:58:31.060373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.784 [2024-12-10 03:58:31.060369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.044 [2024-12-10 03:58:31.204337] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
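On the RPC side, the referral setup that follows amounts to a handful of calls against the target's RPC socket. A minimal sketch using scripts/rpc.py (rpc_cmd in the trace is a wrapper that points it at the /var/tmp/spdk.sock socket shown above); the addresses and ports are the ones this run uses:
# Create the TCP transport and a discovery listener, then publish three referrals
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery   # 'discovery' selects the discovery subsystem
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
scripts/rpc.py nvmf_discovery_get_referrals | jq length    # the test expects 3 here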
00:10:37.044 [2024-12-10 03:58:31.233723] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:37.044 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:37.304 03:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:37.304 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:37.564 03:58:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:37.861 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:37.861 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:37.861 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:37.861 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:37.861 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:37.861 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:37.861 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:38.135 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:38.135 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:38.135 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:38.135 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:38.135 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:38.135 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:38.135 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:38.135 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:38.135 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.135 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:38.135 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.135 03:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:38.135 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:38.135 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:38.135 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.135 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:38.135 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:38.135 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:38.135 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.393 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:38.393 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:38.393 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:38.393 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:38.393 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:38.393 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:38.393 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:38.393 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:38.393 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:38.393 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:38.393 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:38.393 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:38.393 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:38.393 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:38.393 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:38.652 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:38.652 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:38.652 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:38.652 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:10:38.652 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:38.652 03:58:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:38.910 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:38.910 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:38.911 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.911 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:38.911 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.911 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:38.911 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:38.911 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.911 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:38.911 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.911 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:38.911 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:38.911 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:38.911 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:38.911 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:38.911 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:38.911 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
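The referral checks above compare two views of the same data: what the target reports over RPC and what an initiator sees in the discovery log served on 10.0.0.2:8009. Reconstructed from the traced commands (the hostnqn/hostid are this host's values):
# Target view: referral addresses as configured over RPC
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
# Initiator view: referral records returned by the discovery service
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
# Each stage passes when both pipelines print the same list: 127.0.0.2 127.0.0.3 127.0.0.4 right
# after the referrals are added, and an empty list once they have all been removed again.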
00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:39.171 rmmod nvme_tcp 00:10:39.171 rmmod nvme_fabrics 00:10:39.171 rmmod nvme_keyring 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2339359 ']' 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2339359 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2339359 ']' 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2339359 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2339359 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2339359' 00:10:39.171 killing process with pid 2339359 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2339359 00:10:39.171 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2339359 00:10:39.432 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:39.432 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:39.432 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:39.432 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:39.432 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:39.432 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:39.432 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:39.432 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:39.432 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:39.432 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.432 03:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.432 03:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.348 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:41.348 00:10:41.348 real 0m7.328s 00:10:41.348 user 0m11.798s 00:10:41.348 sys 0m2.368s 00:10:41.348 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.348 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.348 ************************************ 00:10:41.348 END TEST nvmf_referrals 00:10:41.348 ************************************ 00:10:41.348 03:58:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:41.348 03:58:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:41.348 03:58:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.348 03:58:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:41.609 ************************************ 00:10:41.609 START TEST nvmf_connect_disconnect 00:10:41.609 ************************************ 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:41.609 * Looking for test storage... 00:10:41.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.609 03:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:41.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.609 --rc genhtml_branch_coverage=1 00:10:41.609 --rc genhtml_function_coverage=1 00:10:41.609 --rc genhtml_legend=1 00:10:41.609 --rc geninfo_all_blocks=1 00:10:41.609 --rc geninfo_unexecuted_blocks=1 00:10:41.609 00:10:41.609 ' 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:41.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.609 --rc genhtml_branch_coverage=1 00:10:41.609 --rc genhtml_function_coverage=1 00:10:41.609 --rc genhtml_legend=1 00:10:41.609 --rc geninfo_all_blocks=1 00:10:41.609 --rc geninfo_unexecuted_blocks=1 00:10:41.609 00:10:41.609 ' 00:10:41.609 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:41.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.609 --rc genhtml_branch_coverage=1 00:10:41.609 --rc genhtml_function_coverage=1 00:10:41.609 --rc genhtml_legend=1 00:10:41.609 --rc geninfo_all_blocks=1 00:10:41.609 --rc geninfo_unexecuted_blocks=1 00:10:41.609 00:10:41.609 ' 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:41.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.610 --rc genhtml_branch_coverage=1 00:10:41.610 --rc genhtml_function_coverage=1 00:10:41.610 --rc genhtml_legend=1 00:10:41.610 --rc geninfo_all_blocks=1 00:10:41.610 --rc geninfo_unexecuted_blocks=1 00:10:41.610 00:10:41.610 ' 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.610 03:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:41.610 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:44.144 
03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:44.144 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:44.144 
03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:44.144 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:44.144 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:44.144 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:44.144 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:44.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:44.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:10:44.145 00:10:44.145 --- 10.0.0.2 ping statistics --- 00:10:44.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.145 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:44.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:44.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:10:44.145 00:10:44.145 --- 10.0.0.1 ping statistics --- 00:10:44.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.145 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2341803 00:10:44.145 03:58:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2341803 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2341803 ']' 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.145 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.145 [2024-12-10 03:58:38.410290] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:10:44.145 [2024-12-10 03:58:38.410390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.145 [2024-12-10 03:58:38.484516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:44.403 [2024-12-10 03:58:38.546274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.403 [2024-12-10 03:58:38.546341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:44.403 [2024-12-10 03:58:38.546355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.403 [2024-12-10 03:58:38.546380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.403 [2024-12-10 03:58:38.546390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
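For readers following the trace: the nvmftestinit/nvmf_tcp_init steps at 03:58:38 above split the two E810 ports so that cvl_0_0 (the target side) is moved into the cvl_0_0_ns_spdk namespace at 10.0.0.2/24 while cvl_0_1 stays in the root namespace at 10.0.0.1/24 as the initiator side, open TCP port 4420 in iptables, verify both directions with ping, and only then launch nvmf_tgt inside the namespace. A minimal standalone sketch of that layout, with interface names and addresses copied from this log (they will differ on other hosts), is:

  # Sketch only: reproduces the topology shown in the trace above.
  NS=cvl_0_0_ns_spdk      # target-side network namespace
  TGT_IF=cvl_0_0          # E810 port handed to the target
  INI_IF=cvl_0_1          # E810 port kept for the initiator
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                          # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator

Keeping the target's port in its own namespace is what lets a single host act as both NVMe/TCP target and initiator over real NICs.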
00:10:44.403 [2024-12-10 03:58:38.548175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.403 [2024-12-10 03:58:38.548241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.403 [2024-12-10 03:58:38.548307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.403 [2024-12-10 03:58:38.548309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.404 [2024-12-10 03:58:38.707296] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.404 03:58:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.404 [2024-12-10 03:58:38.775164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:44.404 03:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:47.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:58.538 rmmod nvme_tcp 00:10:58.538 rmmod nvme_fabrics 00:10:58.538 rmmod nvme_keyring 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2341803 ']' 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2341803 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2341803 ']' 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2341803 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
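The connect_disconnect test body above (connect_disconnect.sh lines 18-24) provisions the target entirely over the RPC socket and then runs five connect/disconnect rounds against nqn.2016-06.io.spdk:cnode1. As a rough sketch, the same provisioning issued directly with scripts/rpc.py (default socket /var/tmp/spdk.sock); the initiator-side nvme-cli lines at the end are illustrative and not taken verbatim from the script:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Target-side provisioning, mirroring the rpc_cmd calls in the trace.
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512          # creates Malloc0 (64 MiB, 512 B blocks)
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Each of the five iterations then does, on the initiator side, roughly:
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1           # prints "disconnected 1 controller(s)"

The five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above are the visible result of those iterations; nvmftestfini then unloads nvme-tcp/nvme-fabrics and kills the target process.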
00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2341803 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2341803' 00:10:58.538 killing process with pid 2341803 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2341803 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2341803 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.538 03:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.074 03:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:01.074 00:11:01.074 real 0m19.140s 00:11:01.074 user 0m56.827s 00:11:01.074 sys 0m3.512s 00:11:01.074 03:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.074 03:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.074 ************************************ 00:11:01.074 END TEST nvmf_connect_disconnect 00:11:01.074 ************************************ 00:11:01.074 03:58:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:01.074 03:58:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:01.074 03:58:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.074 03:58:54 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:01.074 ************************************ 00:11:01.074 START TEST nvmf_multitarget 00:11:01.074 ************************************ 00:11:01.074 03:58:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:01.074 * Looking for test storage... 00:11:01.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.075 03:58:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:01.075 03:58:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:11:01.075 03:58:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:01.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.075 --rc genhtml_branch_coverage=1 00:11:01.075 --rc genhtml_function_coverage=1 00:11:01.075 --rc genhtml_legend=1 00:11:01.075 --rc geninfo_all_blocks=1 00:11:01.075 --rc geninfo_unexecuted_blocks=1 00:11:01.075 00:11:01.075 ' 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:01.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.075 --rc genhtml_branch_coverage=1 00:11:01.075 --rc genhtml_function_coverage=1 00:11:01.075 --rc genhtml_legend=1 00:11:01.075 --rc geninfo_all_blocks=1 00:11:01.075 --rc geninfo_unexecuted_blocks=1 00:11:01.075 00:11:01.075 ' 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:01.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.075 --rc genhtml_branch_coverage=1 00:11:01.075 --rc genhtml_function_coverage=1 00:11:01.075 --rc genhtml_legend=1 00:11:01.075 --rc geninfo_all_blocks=1 00:11:01.075 --rc geninfo_unexecuted_blocks=1 00:11:01.075 00:11:01.075 ' 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:01.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.075 --rc genhtml_branch_coverage=1 00:11:01.075 --rc genhtml_function_coverage=1 00:11:01.075 --rc genhtml_legend=1 00:11:01.075 --rc geninfo_all_blocks=1 00:11:01.075 --rc geninfo_unexecuted_blocks=1 00:11:01.075 00:11:01.075 ' 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.075 03:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.075 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.076 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.076 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.076 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.076 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:01.076 03:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:01.076 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:01.076 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.076 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:01.076 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:01.076 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:01.076 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.076 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.076 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.076 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:01.076 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:01.076 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:01.076 03:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:02.978 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:02.978 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:02.978 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:02.978 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.978 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.979 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:02.979 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:02.979 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.979 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.979 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.979 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.979 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:02.979 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:03.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:11:03.237 00:11:03.237 --- 10.0.0.2 ping statistics --- 00:11:03.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.237 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:03.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:03.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:11:03.237 00:11:03.237 --- 10.0.0.1 ping statistics --- 00:11:03.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.237 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2345570 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2345570 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2345570 ']' 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.237 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:03.237 [2024-12-10 03:58:57.470422] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
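The multitarget test repeats the same nvmftestinit sequence and then, as before, nvmfappstart launches nvmf_tgt inside the namespace (pid 2345570 here) and blocks in waitforlisten until the app answers on /var/tmp/spdk.sock before any test RPCs are issued. A simplified sketch of that start-and-wait step (the retry loop is illustrative; the harness's waitforlisten helper is more elaborate):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # Poll the RPC socket until the target is ready to accept RPCs.
  until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done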
00:11:03.237 [2024-12-10 03:58:57.470508] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.237 [2024-12-10 03:58:57.541244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.237 [2024-12-10 03:58:57.595786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.237 [2024-12-10 03:58:57.595844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.237 [2024-12-10 03:58:57.595871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.237 [2024-12-10 03:58:57.595882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.237 [2024-12-10 03:58:57.595891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.237 [2024-12-10 03:58:57.597527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.237 [2024-12-10 03:58:57.597594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.237 [2024-12-10 03:58:57.597659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.237 [2024-12-10 03:58:57.597662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.496 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.496 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:03.496 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:03.496 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:03.496 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:03.496 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.496 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:03.496 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:03.496 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:03.496 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:03.496 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:03.754 "nvmf_tgt_1" 00:11:03.754 03:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:03.754 "nvmf_tgt_2" 00:11:03.754 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:11:03.754 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:04.011 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:04.011 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:04.011 true 00:11:04.011 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:04.270 true 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:04.270 rmmod nvme_tcp 00:11:04.270 rmmod nvme_fabrics 00:11:04.270 rmmod nvme_keyring 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2345570 ']' 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2345570 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2345570 ']' 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2345570 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.270 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2345570 00:11:04.530 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.530 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.530 03:58:58 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2345570' 00:11:04.530 killing process with pid 2345570 00:11:04.530 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2345570 00:11:04.530 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2345570 00:11:04.530 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:04.530 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:04.530 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:04.530 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:04.530 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:04.530 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:04.530 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:04.530 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:04.530 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:04.530 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.530 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.530 03:58:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.068 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:07.068 00:11:07.068 real 0m6.004s 00:11:07.068 user 0m6.805s 00:11:07.068 sys 0m2.052s 00:11:07.068 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.068 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:07.068 ************************************ 00:11:07.068 END TEST nvmf_multitarget 00:11:07.068 ************************************ 00:11:07.068 03:59:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:07.068 03:59:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:07.068 03:59:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.068 03:59:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:07.068 ************************************ 00:11:07.068 START TEST nvmf_rpc 00:11:07.068 ************************************ 00:11:07.068 03:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:07.068 * Looking for test storage... 
00:11:07.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:07.068 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:07.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.069 --rc genhtml_branch_coverage=1 00:11:07.069 --rc genhtml_function_coverage=1 00:11:07.069 --rc genhtml_legend=1 00:11:07.069 --rc geninfo_all_blocks=1 00:11:07.069 --rc geninfo_unexecuted_blocks=1 00:11:07.069 00:11:07.069 ' 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:07.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.069 --rc genhtml_branch_coverage=1 00:11:07.069 --rc genhtml_function_coverage=1 00:11:07.069 --rc genhtml_legend=1 00:11:07.069 --rc geninfo_all_blocks=1 00:11:07.069 --rc geninfo_unexecuted_blocks=1 00:11:07.069 00:11:07.069 ' 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:07.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.069 --rc genhtml_branch_coverage=1 00:11:07.069 --rc genhtml_function_coverage=1 00:11:07.069 --rc genhtml_legend=1 00:11:07.069 --rc geninfo_all_blocks=1 00:11:07.069 --rc geninfo_unexecuted_blocks=1 00:11:07.069 00:11:07.069 ' 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:07.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.069 --rc genhtml_branch_coverage=1 00:11:07.069 --rc genhtml_function_coverage=1 00:11:07.069 --rc genhtml_legend=1 00:11:07.069 --rc geninfo_all_blocks=1 00:11:07.069 --rc geninfo_unexecuted_blocks=1 00:11:07.069 00:11:07.069 ' 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
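Note: the cmp_versions trace above (deciding whether the installed lcov predates 2.x) splits each version string on '.' and '-' and compares the pieces numerically. A standalone sketch of the same idea, assuming purely numeric components; it is not a copy of scripts/common.sh:

  # Sketch: succeed if $1 is strictly lower than $2, comparing dot/dash-separated components.
  version_lt() {
      local IFS=.- v
      local -a a=($1) b=($2)
      for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
          local x=${a[v]:-0} y=${b[v]:-0}
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1    # versions are equal
  }
  version_lt 1.15 2 && echo "lcov older than 2.x"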
00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:07.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:07.069 03:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:07.069 03:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:08.975 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:08.976 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:08.976 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:08.976 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:08.976 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:08.976 03:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.976 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:09.234 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:09.234 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:09.234 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:09.234 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:09.234 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:09.234 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:09.234 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:09.234 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:09.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:11:09.234 00:11:09.234 --- 10.0.0.2 ping statistics --- 00:11:09.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.234 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:11:09.234 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:09.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:09.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:11:09.234 00:11:09.234 --- 10.0.0.1 ping statistics --- 00:11:09.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.234 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:11:09.234 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.234 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2347681 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2347681 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2347681 ']' 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.235 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.235 [2024-12-10 03:59:03.559063] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
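Note: before rpc.sh starts its own target, nvmf_tcp_init (traced above) moves the target-side port cvl_0_0 into the cvl_0_0_ns_spdk namespace at 10.0.0.2 and leaves the initiator port cvl_0_1 in the root namespace at 10.0.0.1, so the kernel initiator and the SPDK target talk over a real e810 link. The plumbing condenses to the following, with every command taken from the trace:

  # Sketch: target NIC (cvl_0_0) isolated in a namespace at 10.0.0.2, initiator NIC stays at 10.0.0.1.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp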
00:11:09.235 [2024-12-10 03:59:03.559159] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.493 [2024-12-10 03:59:03.631204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.493 [2024-12-10 03:59:03.687178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.493 [2024-12-10 03:59:03.687227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.493 [2024-12-10 03:59:03.687256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.493 [2024-12-10 03:59:03.687267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.493 [2024-12-10 03:59:03.687276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.493 [2024-12-10 03:59:03.688745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.493 [2024-12-10 03:59:03.688770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.493 [2024-12-10 03:59:03.688831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.493 [2024-12-10 03:59:03.688834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.493 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.493 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:09.493 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:09.493 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:09.493 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.493 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.493 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:09.493 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.493 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.493 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.493 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:09.493 "tick_rate": 2700000000, 00:11:09.493 "poll_groups": [ 00:11:09.493 { 00:11:09.493 "name": "nvmf_tgt_poll_group_000", 00:11:09.493 "admin_qpairs": 0, 00:11:09.493 "io_qpairs": 0, 00:11:09.493 "current_admin_qpairs": 0, 00:11:09.493 "current_io_qpairs": 0, 00:11:09.493 "pending_bdev_io": 0, 00:11:09.493 "completed_nvme_io": 0, 00:11:09.493 "transports": [] 00:11:09.493 }, 00:11:09.493 { 00:11:09.493 "name": "nvmf_tgt_poll_group_001", 00:11:09.493 "admin_qpairs": 0, 00:11:09.493 "io_qpairs": 0, 00:11:09.493 "current_admin_qpairs": 0, 00:11:09.493 "current_io_qpairs": 0, 00:11:09.493 "pending_bdev_io": 0, 00:11:09.493 "completed_nvme_io": 0, 00:11:09.493 "transports": [] 00:11:09.493 }, 00:11:09.493 { 00:11:09.493 "name": "nvmf_tgt_poll_group_002", 00:11:09.493 "admin_qpairs": 0, 00:11:09.493 "io_qpairs": 0, 00:11:09.493 
"current_admin_qpairs": 0, 00:11:09.493 "current_io_qpairs": 0, 00:11:09.493 "pending_bdev_io": 0, 00:11:09.493 "completed_nvme_io": 0, 00:11:09.493 "transports": [] 00:11:09.493 }, 00:11:09.493 { 00:11:09.493 "name": "nvmf_tgt_poll_group_003", 00:11:09.493 "admin_qpairs": 0, 00:11:09.493 "io_qpairs": 0, 00:11:09.493 "current_admin_qpairs": 0, 00:11:09.493 "current_io_qpairs": 0, 00:11:09.493 "pending_bdev_io": 0, 00:11:09.493 "completed_nvme_io": 0, 00:11:09.493 "transports": [] 00:11:09.493 } 00:11:09.493 ] 00:11:09.493 }' 00:11:09.493 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:09.493 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:09.493 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:09.493 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:09.751 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:09.751 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:09.751 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:09.751 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.751 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.751 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.751 [2024-12-10 03:59:03.938051] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.751 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.751 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:09.752 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.752 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.752 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.752 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:09.752 "tick_rate": 2700000000, 00:11:09.752 "poll_groups": [ 00:11:09.752 { 00:11:09.752 "name": "nvmf_tgt_poll_group_000", 00:11:09.752 "admin_qpairs": 0, 00:11:09.752 "io_qpairs": 0, 00:11:09.752 "current_admin_qpairs": 0, 00:11:09.752 "current_io_qpairs": 0, 00:11:09.752 "pending_bdev_io": 0, 00:11:09.752 "completed_nvme_io": 0, 00:11:09.752 "transports": [ 00:11:09.752 { 00:11:09.752 "trtype": "TCP" 00:11:09.752 } 00:11:09.752 ] 00:11:09.752 }, 00:11:09.752 { 00:11:09.752 "name": "nvmf_tgt_poll_group_001", 00:11:09.752 "admin_qpairs": 0, 00:11:09.752 "io_qpairs": 0, 00:11:09.752 "current_admin_qpairs": 0, 00:11:09.752 "current_io_qpairs": 0, 00:11:09.752 "pending_bdev_io": 0, 00:11:09.752 "completed_nvme_io": 0, 00:11:09.752 "transports": [ 00:11:09.752 { 00:11:09.752 "trtype": "TCP" 00:11:09.752 } 00:11:09.752 ] 00:11:09.752 }, 00:11:09.752 { 00:11:09.752 "name": "nvmf_tgt_poll_group_002", 00:11:09.752 "admin_qpairs": 0, 00:11:09.752 "io_qpairs": 0, 00:11:09.752 "current_admin_qpairs": 0, 00:11:09.752 "current_io_qpairs": 0, 00:11:09.752 "pending_bdev_io": 0, 00:11:09.752 "completed_nvme_io": 0, 00:11:09.752 "transports": [ 00:11:09.752 { 00:11:09.752 "trtype": "TCP" 
00:11:09.752 } 00:11:09.752 ] 00:11:09.752 }, 00:11:09.752 { 00:11:09.752 "name": "nvmf_tgt_poll_group_003", 00:11:09.752 "admin_qpairs": 0, 00:11:09.752 "io_qpairs": 0, 00:11:09.752 "current_admin_qpairs": 0, 00:11:09.752 "current_io_qpairs": 0, 00:11:09.752 "pending_bdev_io": 0, 00:11:09.752 "completed_nvme_io": 0, 00:11:09.752 "transports": [ 00:11:09.752 { 00:11:09.752 "trtype": "TCP" 00:11:09.752 } 00:11:09.752 ] 00:11:09.752 } 00:11:09.752 ] 00:11:09.752 }' 00:11:09.752 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:09.752 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:09.752 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:09.752 03:59:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.752 Malloc1 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.752 [2024-12-10 03:59:04.101368] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:09.752 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:09.752 [2024-12-10 03:59:04.123933] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:10.010 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:10.010 could not add new controller: failed to write to nvme-fabrics device 00:11:10.010 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:10.010 03:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:10.010 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:10.010 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:10.010 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:10.010 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.010 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.010 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.010 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:10.575 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:10.575 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:10.575 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:10.575 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:10.575 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:12.526 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:12.526 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:12.526 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:12.527 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:12.784 [2024-12-10 03:59:06.915371] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:12.784 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:12.784 could not add new controller: failed to write to nvme-fabrics device 00:11:12.784 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:12.784 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:12.784 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:12.784 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:12.784 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:12.784 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.784 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.784 
03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.784 03:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:13.349 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:13.349 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:13.349 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.349 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:13.349 03:59:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:15.244 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:15.244 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:15.244 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.244 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:15.244 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.244 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:15.244 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:15.502 
03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.502 [2024-12-10 03:59:09.700724] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.502 03:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:16.068 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:16.068 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:16.068 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.068 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:16.068 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:17.964 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:17.964 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:17.964 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:17.964 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:17.964 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.964 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:17.964 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.222 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:18.222 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:18.222 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:18.222 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.222 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:18.222 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.222 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:18.222 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:18.222 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.222 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.222 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.223 [2024-12-10 03:59:12.429385] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.223 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.787 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:18.787 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:18.787 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.787 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:18.787 03:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:20.684 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:20.684 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:20.684 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:20.684 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:20.684 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.684 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:20.684 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.940 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.941 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.941 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.941 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.941 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.941 [2024-12-10 03:59:15.205195] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.941 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.941 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:20.941 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.941 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.941 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.941 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:20.941 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.941 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.941 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.941 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:21.872 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:21.872 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:21.872 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:21.872 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:21.872 03:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:23.769 
03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:23.769 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:23.769 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:23.769 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:23.769 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:23.769 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:23.769 03:59:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:23.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
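[Annotation] For reference, each pass of the target/rpc.sh loop being traced here repeats the same build-up/tear-down cycle; the sketch below simply strings together the commands as they appear in this log. It assumes rpc_cmd dispatches to the running nvmf_tgt (in SPDK autotest it is assumed to wrap scripts/rpc.py) and that NVME_HOSTNQN/NVME_HOSTID hold the host NQN/UUID shown in the connect lines; the subsystem NQN, bdev name, address and port are the values used by this particular run.

  # one iteration of the create/connect/tear-down cycle traced above (values from this run)
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  waitforserial SPDKISFASTANDAWESOME              # poll lsblk until the namespace shows up
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  waitforserial_disconnect SPDKISFASTANDAWESOME   # poll lsblk until the serial is gone again
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The loop runs this five times (seq 1 5), which is why the trace blocks before and after this point differ only in their timestamps.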
00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.769 [2024-12-10 03:59:18.060883] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.769 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:24.335 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:24.335 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:24.335 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:24.335 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:24.335 03:59:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:26.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
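[Annotation] The waitforserial / waitforserial_disconnect traces interleaved here are plain polling helpers. The snippet below is a rough reconstruction from the xtrace output only, not the actual autotest_common.sh source:

  # Rough reconstruction of waitforserial from the trace above; the real helper
  # lives in autotest_common.sh and may take extra arguments.
  waitforserial() {
      local serial=$1 i=0 nvme_device_counter=1 nvme_devices=0
      sleep 2                                           # give the fabric connect time to settle
      while (( i++ <= 15 )); do
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( nvme_devices == nvme_device_counter )) && return 0
          sleep 2
      done
      return 1
  }
  # waitforserial_disconnect does the inverse: it returns once
  # lsblk -l -o NAME,SERIAL | grep -q -w "$serial" no longer matches.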
00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.862 [2024-12-10 03:59:20.835935] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.862 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:27.429 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:27.429 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:27.429 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.429 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:27.429 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:29.326 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:29.326 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:29.326 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.326 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:29.326 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.326 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:29.326 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.326 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:29.326 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:29.326 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:29.326 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.326 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:29.326 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.326 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:29.326 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:29.326 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.326 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:29.585 
03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.585 [2024-12-10 03:59:23.742574] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.585 [2024-12-10 03:59:23.790634] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.585 
03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.585 [2024-12-10 03:59:23.838791] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.585 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 [2024-12-10 03:59:23.886939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 [2024-12-10 03:59:23.935114] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.586 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.844 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.844 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:29.844 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.844 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.844 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.844 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:29.844 "tick_rate": 2700000000, 00:11:29.844 "poll_groups": [ 00:11:29.844 { 00:11:29.844 "name": "nvmf_tgt_poll_group_000", 00:11:29.844 "admin_qpairs": 2, 00:11:29.844 "io_qpairs": 84, 00:11:29.844 "current_admin_qpairs": 0, 00:11:29.844 "current_io_qpairs": 0, 00:11:29.844 "pending_bdev_io": 0, 00:11:29.844 "completed_nvme_io": 205, 00:11:29.844 "transports": [ 00:11:29.844 { 00:11:29.844 "trtype": "TCP" 00:11:29.844 } 00:11:29.844 ] 00:11:29.844 }, 00:11:29.844 { 00:11:29.844 "name": "nvmf_tgt_poll_group_001", 00:11:29.844 "admin_qpairs": 2, 00:11:29.844 "io_qpairs": 84, 00:11:29.844 "current_admin_qpairs": 0, 00:11:29.844 "current_io_qpairs": 0, 00:11:29.844 "pending_bdev_io": 0, 00:11:29.844 "completed_nvme_io": 232, 00:11:29.844 "transports": [ 00:11:29.844 { 00:11:29.844 "trtype": "TCP" 00:11:29.844 } 00:11:29.844 ] 00:11:29.844 }, 00:11:29.844 { 00:11:29.844 "name": "nvmf_tgt_poll_group_002", 00:11:29.844 "admin_qpairs": 1, 00:11:29.844 "io_qpairs": 84, 00:11:29.844 "current_admin_qpairs": 0, 00:11:29.844 "current_io_qpairs": 0, 00:11:29.844 "pending_bdev_io": 0, 00:11:29.844 "completed_nvme_io": 135, 00:11:29.844 "transports": [ 00:11:29.844 { 00:11:29.844 "trtype": "TCP" 00:11:29.844 } 00:11:29.844 ] 00:11:29.844 }, 00:11:29.844 { 00:11:29.844 "name": "nvmf_tgt_poll_group_003", 00:11:29.844 "admin_qpairs": 2, 00:11:29.844 "io_qpairs": 84, 00:11:29.844 "current_admin_qpairs": 0, 00:11:29.844 "current_io_qpairs": 0, 00:11:29.844 "pending_bdev_io": 0, 00:11:29.844 "completed_nvme_io": 114, 00:11:29.844 "transports": [ 00:11:29.844 { 00:11:29.844 "trtype": "TCP" 00:11:29.844 } 00:11:29.844 ] 00:11:29.844 } 00:11:29.844 ] 00:11:29.844 }' 00:11:29.844 03:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:29.844 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:29.844 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:29.844 03:59:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:29.844 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:29.844 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:29.844 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:29.844 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:29.844 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:29.844 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:29.844 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:29.844 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:29.844 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:29.844 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:29.844 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:29.844 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:29.844 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:29.844 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:29.844 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:29.844 rmmod nvme_tcp 00:11:29.844 rmmod nvme_fabrics 00:11:29.844 rmmod nvme_keyring 00:11:29.844 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:29.844 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:29.844 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:29.845 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2347681 ']' 00:11:29.845 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2347681 00:11:29.845 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2347681 ']' 00:11:29.845 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2347681 00:11:29.845 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:29.845 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.845 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2347681 00:11:29.845 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:29.845 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:29.845 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2347681' 00:11:29.845 killing process with pid 2347681 00:11:29.845 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2347681 00:11:29.845 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2347681 00:11:30.104 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:30.104 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:30.104 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:30.104 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:30.104 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:30.104 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:30.104 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:30.104 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:30.104 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:30.104 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.104 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.104 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.650 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:32.650 00:11:32.650 real 0m25.506s 00:11:32.650 user 1m22.386s 00:11:32.650 sys 0m4.329s 00:11:32.650 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.650 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.650 ************************************ 00:11:32.650 END TEST nvmf_rpc 00:11:32.650 ************************************ 00:11:32.650 03:59:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:32.650 03:59:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:32.650 03:59:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.650 03:59:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:32.650 ************************************ 00:11:32.650 START TEST nvmf_invalid 00:11:32.650 ************************************ 00:11:32.650 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:32.650 * Looking for test storage... 
00:11:32.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.650 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:32.650 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:11:32.650 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:32.650 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:32.650 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.650 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:32.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.651 --rc genhtml_branch_coverage=1 00:11:32.651 --rc genhtml_function_coverage=1 00:11:32.651 --rc genhtml_legend=1 00:11:32.651 --rc geninfo_all_blocks=1 00:11:32.651 --rc geninfo_unexecuted_blocks=1 00:11:32.651 00:11:32.651 ' 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:32.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.651 --rc genhtml_branch_coverage=1 00:11:32.651 --rc genhtml_function_coverage=1 00:11:32.651 --rc genhtml_legend=1 00:11:32.651 --rc geninfo_all_blocks=1 00:11:32.651 --rc geninfo_unexecuted_blocks=1 00:11:32.651 00:11:32.651 ' 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:32.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.651 --rc genhtml_branch_coverage=1 00:11:32.651 --rc genhtml_function_coverage=1 00:11:32.651 --rc genhtml_legend=1 00:11:32.651 --rc geninfo_all_blocks=1 00:11:32.651 --rc geninfo_unexecuted_blocks=1 00:11:32.651 00:11:32.651 ' 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:32.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.651 --rc genhtml_branch_coverage=1 00:11:32.651 --rc genhtml_function_coverage=1 00:11:32.651 --rc genhtml_legend=1 00:11:32.651 --rc geninfo_all_blocks=1 00:11:32.651 --rc geninfo_unexecuted_blocks=1 00:11:32.651 00:11:32.651 ' 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:32.651 03:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:32.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:32.651 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.652 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.652 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.652 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:32.652 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:32.652 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:32.652 03:59:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:34.553 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:34.553 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:34.553 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:34.553 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:34.553 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.554 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.554 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.554 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.554 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:34.554 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.812 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.812 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.812 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:34.812 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:34.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:11:34.812 00:11:34.812 --- 10.0.0.2 ping statistics --- 00:11:34.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.812 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:11:34.812 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:34.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:11:34.812 00:11:34.812 --- 10.0.0.1 ping statistics --- 00:11:34.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.812 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:11:34.812 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.812 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:11:34.812 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:34.812 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.812 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:34.812 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:34.812 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.812 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:34.812 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:34.812 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:34.812 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:34.812 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:34.812 03:59:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:34.812 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2352185 00:11:34.812 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.812 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2352185 00:11:34.812 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2352185 ']' 00:11:34.812 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.812 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.812 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.812 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.812 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:34.812 [2024-12-10 03:59:29.049559] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
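[Annotation added for readability] The namespace and addressing setup that the trace above walks through reduces to the commands below. This is only a condensed sketch of what this particular run executed: the interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace, and the 10.0.0.1/10.0.0.2 addresses are values taken from this log (nvmf/common.sh defaults for this host), not fixed requirements.

    # create a network namespace for the target side and move one E810 port into it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # address the initiator-side port on the host and the target-side port inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # bring the links (and namespace loopback) up
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # accept NVMe/TCP traffic on port 4420 arriving on the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # sanity-check connectivity in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With connectivity confirmed, the test launches nvmf_tgt inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which produces the SPDK/DPDK startup output that follows.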
00:11:34.812 [2024-12-10 03:59:29.049649] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.812 [2024-12-10 03:59:29.123224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.812 [2024-12-10 03:59:29.181418] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.812 [2024-12-10 03:59:29.181465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.812 [2024-12-10 03:59:29.181486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.812 [2024-12-10 03:59:29.181496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.813 [2024-12-10 03:59:29.181505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.813 [2024-12-10 03:59:29.182933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.813 [2024-12-10 03:59:29.182989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.813 [2024-12-10 03:59:29.183056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.813 [2024-12-10 03:59:29.183059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.070 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.070 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:35.070 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:35.070 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:35.070 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:35.070 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.070 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:35.070 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10333 00:11:35.328 [2024-12-10 03:59:29.563608] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:35.328 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:35.328 { 00:11:35.328 "nqn": "nqn.2016-06.io.spdk:cnode10333", 00:11:35.328 "tgt_name": "foobar", 00:11:35.328 "method": "nvmf_create_subsystem", 00:11:35.328 "req_id": 1 00:11:35.328 } 00:11:35.328 Got JSON-RPC error response 00:11:35.328 response: 00:11:35.328 { 00:11:35.328 "code": -32603, 00:11:35.328 "message": "Unable to find target foobar" 00:11:35.328 }' 00:11:35.328 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:35.328 { 00:11:35.328 "nqn": "nqn.2016-06.io.spdk:cnode10333", 00:11:35.328 "tgt_name": "foobar", 00:11:35.328 "method": "nvmf_create_subsystem", 00:11:35.328 "req_id": 1 00:11:35.328 } 00:11:35.328 Got JSON-RPC error response 00:11:35.328 
response: 00:11:35.328 { 00:11:35.328 "code": -32603, 00:11:35.328 "message": "Unable to find target foobar" 00:11:35.328 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:35.328 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:35.328 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode28049 00:11:35.585 [2024-12-10 03:59:29.892715] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28049: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:35.585 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:35.585 { 00:11:35.585 "nqn": "nqn.2016-06.io.spdk:cnode28049", 00:11:35.585 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:35.585 "method": "nvmf_create_subsystem", 00:11:35.585 "req_id": 1 00:11:35.585 } 00:11:35.585 Got JSON-RPC error response 00:11:35.585 response: 00:11:35.585 { 00:11:35.585 "code": -32602, 00:11:35.585 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:35.585 }' 00:11:35.585 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:35.585 { 00:11:35.585 "nqn": "nqn.2016-06.io.spdk:cnode28049", 00:11:35.585 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:35.585 "method": "nvmf_create_subsystem", 00:11:35.585 "req_id": 1 00:11:35.585 } 00:11:35.585 Got JSON-RPC error response 00:11:35.585 response: 00:11:35.585 { 00:11:35.585 "code": -32602, 00:11:35.585 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:35.585 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:35.585 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:35.585 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1500 00:11:35.842 [2024-12-10 03:59:30.217863] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1500: invalid model number 'SPDK_Controller' 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:36.100 { 00:11:36.100 "nqn": "nqn.2016-06.io.spdk:cnode1500", 00:11:36.100 "model_number": "SPDK_Controller\u001f", 00:11:36.100 "method": "nvmf_create_subsystem", 00:11:36.100 "req_id": 1 00:11:36.100 } 00:11:36.100 Got JSON-RPC error response 00:11:36.100 response: 00:11:36.100 { 00:11:36.100 "code": -32602, 00:11:36.100 "message": "Invalid MN SPDK_Controller\u001f" 00:11:36.100 }' 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:36.100 { 00:11:36.100 "nqn": "nqn.2016-06.io.spdk:cnode1500", 00:11:36.100 "model_number": "SPDK_Controller\u001f", 00:11:36.100 "method": "nvmf_create_subsystem", 00:11:36.100 "req_id": 1 00:11:36.100 } 00:11:36.100 Got JSON-RPC error response 00:11:36.100 response: 00:11:36.100 { 00:11:36.100 "code": -32602, 00:11:36.100 "message": "Invalid MN SPDK_Controller\u001f" 00:11:36.100 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:36.100 03:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:36.100 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.101 03:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:36.101 
03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 
00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ` == \- ]] 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '`d~L0)Q'\''CCw*`c/'\''}qPAN' 00:11:36.101 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '`d~L0)Q'\''CCw*`c/'\''}qPAN' nqn.2016-06.io.spdk:cnode17022 00:11:36.360 [2024-12-10 03:59:30.570936] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17022: invalid serial number '`d~L0)Q'CCw*`c/'}qPAN' 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:36.360 { 00:11:36.360 "nqn": "nqn.2016-06.io.spdk:cnode17022", 00:11:36.360 "serial_number": "`d~L0)Q'\''CCw*`c/'\''}qPAN", 00:11:36.360 "method": "nvmf_create_subsystem", 00:11:36.360 "req_id": 1 00:11:36.360 } 00:11:36.360 Got JSON-RPC error response 00:11:36.360 response: 00:11:36.360 { 00:11:36.360 "code": -32602, 00:11:36.360 "message": "Invalid SN `d~L0)Q'\''CCw*`c/'\''}qPAN" 00:11:36.360 }' 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:36.360 { 00:11:36.360 "nqn": "nqn.2016-06.io.spdk:cnode17022", 00:11:36.360 "serial_number": "`d~L0)Q'CCw*`c/'}qPAN", 00:11:36.360 "method": "nvmf_create_subsystem", 00:11:36.360 "req_id": 1 00:11:36.360 } 00:11:36.360 Got JSON-RPC error response 00:11:36.360 response: 00:11:36.360 { 00:11:36.360 "code": -32602, 00:11:36.360 "message": "Invalid SN `d~L0)Q'CCw*`c/'}qPAN" 00:11:36.360 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' 
'70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x46' 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:36.360 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 57 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
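When the loop completes (it continues in the trace below), the assembled string, echoed further down as '^sD@5F@;I<,L9n~5he-,0W)?x=B?rcS)/*yg)TRz' with a literal DEL byte before the B, is handed to nvmf_create_subsystem as the model number via -d, and the test asserts that the RPC fails with a message containing 'Invalid MN'; at 41 characters it does not fit the 40-byte NVMe model-number field. A minimal reproduction of that check, using the rpc.py path and cnode name from this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Expected to fail with JSON-RPC error -32602 and a message containing "Invalid MN".
    out=$($RPC nvmf_create_subsystem -d "$string" nqn.2016-06.io.spdk:cnode30404 2>&1) || true
    [[ $out == *"Invalid MN"* ]] && echo 'invalid model number rejected as expected'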
00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x79' 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.361 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:11:36.362 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:36.362 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:11:36.362 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.362 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.362 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:36.362 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:36.362 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:36.362 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.362 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.362 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:36.620 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:36.620 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:36.620 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.620 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.620 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:11:36.620 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:36.620 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:11:36.620 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.620 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.620 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:36.620 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:36.620 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:36.620 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:36.620 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:36.620 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ^ == \- ]] 00:11:36.620 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '^sD@5F@;I<,L9n~5he-,0W)?x=B?rcS)/*yg)TRz' 00:11:36.620 03:59:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '^sD@5F@;I<,L9n~5he-,0W)?x=B?rcS)/*yg)TRz' nqn.2016-06.io.spdk:cnode30404 00:11:36.620 [2024-12-10 03:59:30.996311] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode30404: invalid model number '^sD@5F@;I<,L9n~5he-,0W)?x=B?rcS)/*yg)TRz' 00:11:36.878 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:36.878 { 00:11:36.878 "nqn": "nqn.2016-06.io.spdk:cnode30404", 00:11:36.878 "model_number": "^sD@5F@;I<,L9n~5he-,0W)?x=\u007fB?rcS)/*yg)TRz", 00:11:36.878 "method": "nvmf_create_subsystem", 00:11:36.878 "req_id": 1 00:11:36.878 } 00:11:36.878 Got JSON-RPC error response 00:11:36.878 response: 00:11:36.878 { 00:11:36.878 "code": -32602, 00:11:36.878 "message": "Invalid MN ^sD@5F@;I<,L9n~5he-,0W)?x=\u007fB?rcS)/*yg)TRz" 00:11:36.878 }' 00:11:36.878 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:36.878 { 00:11:36.878 "nqn": "nqn.2016-06.io.spdk:cnode30404", 00:11:36.878 "model_number": "^sD@5F@;I<,L9n~5he-,0W)?x=\u007fB?rcS)/*yg)TRz", 00:11:36.878 "method": "nvmf_create_subsystem", 00:11:36.878 "req_id": 1 00:11:36.878 } 00:11:36.878 Got JSON-RPC error response 00:11:36.878 response: 00:11:36.878 { 00:11:36.878 "code": -32602, 00:11:36.878 "message": "Invalid MN ^sD@5F@;I<,L9n~5he-,0W)?x=\u007fB?rcS)/*yg)TRz" 00:11:36.878 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:36.878 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:36.878 [2024-12-10 03:59:31.257260] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.137 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:37.395 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:37.395 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:37.395 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:37.395 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:37.395 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:37.652 [2024-12-10 03:59:31.819109] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:37.652 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:37.652 { 00:11:37.652 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:37.652 "listen_address": { 00:11:37.652 "trtype": "tcp", 00:11:37.652 "traddr": "", 00:11:37.652 "trsvcid": "4421" 00:11:37.652 }, 00:11:37.652 "method": "nvmf_subsystem_remove_listener", 00:11:37.652 "req_id": 1 00:11:37.652 } 00:11:37.652 Got JSON-RPC error response 00:11:37.652 response: 00:11:37.652 { 00:11:37.652 "code": -32602, 00:11:37.652 "message": "Invalid parameters" 00:11:37.652 }' 00:11:37.652 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:37.652 { 00:11:37.652 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:37.652 "listen_address": { 00:11:37.652 "trtype": "tcp", 00:11:37.652 "traddr": "", 00:11:37.652 "trsvcid": "4421" 00:11:37.652 }, 00:11:37.652 "method": "nvmf_subsystem_remove_listener", 00:11:37.652 "req_id": 1 00:11:37.652 } 00:11:37.652 Got JSON-RPC error response 00:11:37.652 response: 00:11:37.652 { 
00:11:37.652 "code": -32602, 00:11:37.652 "message": "Invalid parameters" 00:11:37.652 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:37.652 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1400 -i 0 00:11:37.910 [2024-12-10 03:59:32.091942] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1400: invalid cntlid range [0-65519] 00:11:37.910 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:37.910 { 00:11:37.910 "nqn": "nqn.2016-06.io.spdk:cnode1400", 00:11:37.910 "min_cntlid": 0, 00:11:37.910 "method": "nvmf_create_subsystem", 00:11:37.910 "req_id": 1 00:11:37.910 } 00:11:37.910 Got JSON-RPC error response 00:11:37.910 response: 00:11:37.910 { 00:11:37.910 "code": -32602, 00:11:37.910 "message": "Invalid cntlid range [0-65519]" 00:11:37.910 }' 00:11:37.910 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:37.910 { 00:11:37.910 "nqn": "nqn.2016-06.io.spdk:cnode1400", 00:11:37.910 "min_cntlid": 0, 00:11:37.910 "method": "nvmf_create_subsystem", 00:11:37.910 "req_id": 1 00:11:37.910 } 00:11:37.910 Got JSON-RPC error response 00:11:37.910 response: 00:11:37.910 { 00:11:37.910 "code": -32602, 00:11:37.910 "message": "Invalid cntlid range [0-65519]" 00:11:37.910 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:37.910 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30758 -i 65520 00:11:38.169 [2024-12-10 03:59:32.425064] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30758: invalid cntlid range [65520-65519] 00:11:38.169 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:38.169 { 00:11:38.169 "nqn": "nqn.2016-06.io.spdk:cnode30758", 00:11:38.169 "min_cntlid": 65520, 00:11:38.169 "method": "nvmf_create_subsystem", 00:11:38.169 "req_id": 1 00:11:38.169 } 00:11:38.169 Got JSON-RPC error response 00:11:38.169 response: 00:11:38.169 { 00:11:38.169 "code": -32602, 00:11:38.169 "message": "Invalid cntlid range [65520-65519]" 00:11:38.169 }' 00:11:38.169 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:38.169 { 00:11:38.169 "nqn": "nqn.2016-06.io.spdk:cnode30758", 00:11:38.169 "min_cntlid": 65520, 00:11:38.169 "method": "nvmf_create_subsystem", 00:11:38.169 "req_id": 1 00:11:38.169 } 00:11:38.169 Got JSON-RPC error response 00:11:38.169 response: 00:11:38.169 { 00:11:38.169 "code": -32602, 00:11:38.169 "message": "Invalid cntlid range [65520-65519]" 00:11:38.169 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:38.169 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode39 -I 0 00:11:38.426 [2024-12-10 03:59:32.738073] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode39: invalid cntlid range [1-0] 00:11:38.426 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:38.426 { 00:11:38.426 "nqn": "nqn.2016-06.io.spdk:cnode39", 00:11:38.426 "max_cntlid": 0, 00:11:38.426 "method": "nvmf_create_subsystem", 00:11:38.426 "req_id": 
1 00:11:38.426 } 00:11:38.426 Got JSON-RPC error response 00:11:38.426 response: 00:11:38.426 { 00:11:38.426 "code": -32602, 00:11:38.426 "message": "Invalid cntlid range [1-0]" 00:11:38.426 }' 00:11:38.426 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:38.426 { 00:11:38.426 "nqn": "nqn.2016-06.io.spdk:cnode39", 00:11:38.427 "max_cntlid": 0, 00:11:38.427 "method": "nvmf_create_subsystem", 00:11:38.427 "req_id": 1 00:11:38.427 } 00:11:38.427 Got JSON-RPC error response 00:11:38.427 response: 00:11:38.427 { 00:11:38.427 "code": -32602, 00:11:38.427 "message": "Invalid cntlid range [1-0]" 00:11:38.427 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:38.427 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2417 -I 65520 00:11:38.684 [2024-12-10 03:59:33.006984] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2417: invalid cntlid range [1-65520] 00:11:38.684 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:38.684 { 00:11:38.684 "nqn": "nqn.2016-06.io.spdk:cnode2417", 00:11:38.684 "max_cntlid": 65520, 00:11:38.684 "method": "nvmf_create_subsystem", 00:11:38.684 "req_id": 1 00:11:38.684 } 00:11:38.684 Got JSON-RPC error response 00:11:38.684 response: 00:11:38.684 { 00:11:38.684 "code": -32602, 00:11:38.684 "message": "Invalid cntlid range [1-65520]" 00:11:38.684 }' 00:11:38.684 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:38.684 { 00:11:38.684 "nqn": "nqn.2016-06.io.spdk:cnode2417", 00:11:38.684 "max_cntlid": 65520, 00:11:38.684 "method": "nvmf_create_subsystem", 00:11:38.685 "req_id": 1 00:11:38.685 } 00:11:38.685 Got JSON-RPC error response 00:11:38.685 response: 00:11:38.685 { 00:11:38.685 "code": -32602, 00:11:38.685 "message": "Invalid cntlid range [1-65520]" 00:11:38.685 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:38.685 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11565 -i 6 -I 5 00:11:38.942 [2024-12-10 03:59:33.275911] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11565: invalid cntlid range [6-5] 00:11:38.942 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:38.942 { 00:11:38.942 "nqn": "nqn.2016-06.io.spdk:cnode11565", 00:11:38.942 "min_cntlid": 6, 00:11:38.942 "max_cntlid": 5, 00:11:38.942 "method": "nvmf_create_subsystem", 00:11:38.942 "req_id": 1 00:11:38.942 } 00:11:38.942 Got JSON-RPC error response 00:11:38.942 response: 00:11:38.942 { 00:11:38.942 "code": -32602, 00:11:38.942 "message": "Invalid cntlid range [6-5]" 00:11:38.942 }' 00:11:38.942 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:38.942 { 00:11:38.942 "nqn": "nqn.2016-06.io.spdk:cnode11565", 00:11:38.942 "min_cntlid": 6, 00:11:38.942 "max_cntlid": 5, 00:11:38.942 "method": "nvmf_create_subsystem", 00:11:38.942 "req_id": 1 00:11:38.942 } 00:11:38.942 Got JSON-RPC error response 00:11:38.942 response: 00:11:38.942 { 00:11:38.942 "code": -32602, 00:11:38.942 "message": "Invalid cntlid range [6-5]" 00:11:38.942 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:38.942 03:59:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:39.200 { 00:11:39.200 "name": "foobar", 00:11:39.200 "method": "nvmf_delete_target", 00:11:39.200 "req_id": 1 00:11:39.200 } 00:11:39.200 Got JSON-RPC error response 00:11:39.200 response: 00:11:39.200 { 00:11:39.200 "code": -32602, 00:11:39.200 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:39.200 }' 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:39.200 { 00:11:39.200 "name": "foobar", 00:11:39.200 "method": "nvmf_delete_target", 00:11:39.200 "req_id": 1 00:11:39.200 } 00:11:39.200 Got JSON-RPC error response 00:11:39.200 response: 00:11:39.200 { 00:11:39.200 "code": -32602, 00:11:39.200 "message": "The specified target doesn't exist, cannot delete it." 00:11:39.200 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:39.200 rmmod nvme_tcp 00:11:39.200 rmmod nvme_fabrics 00:11:39.200 rmmod nvme_keyring 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2352185 ']' 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2352185 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2352185 ']' 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2352185 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2352185 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2352185' 00:11:39.200 killing process with pid 2352185 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2352185 00:11:39.200 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2352185 00:11:39.460 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:39.460 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:39.460 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:39.460 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:11:39.460 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:11:39.460 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:39.460 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:11:39.460 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:39.460 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:39.460 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.460 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.460 03:59:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:42.025 00:11:42.025 real 0m9.228s 00:11:42.025 user 0m22.423s 00:11:42.025 sys 0m2.500s 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:42.025 ************************************ 00:11:42.025 END TEST nvmf_invalid 00:11:42.025 ************************************ 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.025 ************************************ 00:11:42.025 START TEST nvmf_connect_stress 00:11:42.025 ************************************ 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:42.025 * Looking for test storage... 
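Alongside the model-number check, the invalid.sh run above probes nvmf_create_subsystem with out-of-range controller IDs: the target accepts cntlid values only in [1, 65519] with min <= max, so a minimum of 0, a minimum or maximum of 65520, a maximum of 0, and a min of 6 with a max of 5 are all refused with JSON-RPC error -32602 'Invalid cntlid range'; removing a listener with an empty address and deleting a target that was never created are refused with the same error code. The calls, reproduced from the trace above:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Each call is expected to fail with "Invalid cntlid range [...]":
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1400  -i 0         # min_cntlid below 1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30758 -i 65520     # min_cntlid above 65519
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode39    -I 0         # max_cntlid below 1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2417  -I 65520     # max_cntlid above 65519
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11565 -i 6 -I 5    # min_cntlid greater than max_cntlid
    # Expected to fail with "Invalid parameters":
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
    # Expected to fail with "The specified target doesn't exist, cannot delete it.":
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar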
00:11:42.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.025 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:42.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.026 --rc genhtml_branch_coverage=1 00:11:42.026 --rc genhtml_function_coverage=1 00:11:42.026 --rc genhtml_legend=1 00:11:42.026 --rc geninfo_all_blocks=1 00:11:42.026 --rc geninfo_unexecuted_blocks=1 00:11:42.026 00:11:42.026 ' 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:42.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.026 --rc genhtml_branch_coverage=1 00:11:42.026 --rc genhtml_function_coverage=1 00:11:42.026 --rc genhtml_legend=1 00:11:42.026 --rc geninfo_all_blocks=1 00:11:42.026 --rc geninfo_unexecuted_blocks=1 00:11:42.026 00:11:42.026 ' 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:42.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.026 --rc genhtml_branch_coverage=1 00:11:42.026 --rc genhtml_function_coverage=1 00:11:42.026 --rc genhtml_legend=1 00:11:42.026 --rc geninfo_all_blocks=1 00:11:42.026 --rc geninfo_unexecuted_blocks=1 00:11:42.026 00:11:42.026 ' 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:42.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.026 --rc genhtml_branch_coverage=1 00:11:42.026 --rc genhtml_function_coverage=1 00:11:42.026 --rc genhtml_legend=1 00:11:42.026 --rc geninfo_all_blocks=1 00:11:42.026 --rc geninfo_unexecuted_blocks=1 00:11:42.026 00:11:42.026 ' 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:42.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.026 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.026 03:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:42.026 03:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:42.026 03:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:42.026 03:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:11:43.930 03:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.930 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:43.931 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:43.931 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:43.931 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:43.931 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.931 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.189 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.189 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.189 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:44.189 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.189 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:44.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:44.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:11:44.190 00:11:44.190 --- 10.0.0.2 ping statistics --- 00:11:44.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.190 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:44.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:11:44.190 00:11:44.190 --- 10.0.0.1 ping statistics --- 00:11:44.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.190 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2354948 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2354948 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2354948 ']' 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:44.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.190 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.190 [2024-12-10 03:59:38.453774] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:11:44.190 [2024-12-10 03:59:38.453869] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.190 [2024-12-10 03:59:38.529427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:44.448 [2024-12-10 03:59:38.587655] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.448 [2024-12-10 03:59:38.587703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.448 [2024-12-10 03:59:38.587716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.448 [2024-12-10 03:59:38.587727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.448 [2024-12-10 03:59:38.587737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.448 [2024-12-10 03:59:38.589217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.448 [2024-12-10 03:59:38.589314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.448 [2024-12-10 03:59:38.589323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.448 [2024-12-10 03:59:38.731082] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
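[Illustrative aside: not part of the captured console output.] The trace here shows connect_stress.sh provisioning the NVMe-oF target over RPC: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, add a TCP listener on 10.0.0.2:4420, back it with a null bdev, and then launch the connect_stress initiator against it. Assuming rpc_cmd is simply forwarding these arguments to scripts/rpc.py of the nvmf_tgt started above (and that an nvmf_subsystem_add_ns call follows, which falls outside this excerpt), the same setup could be reproduced by hand roughly as sketched below; paths are relative to the SPDK repo root and all flag values mirror the ones visible in the trace:

  # target-side provisioning, issued to the running nvmf_tgt over its RPC socket
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512                          # null bdev backing the subsystem, as in the trace
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # assumed step, not shown in this excerpt

  # initiator side: run the connect_stress tool against the listener for 10 seconds (-t 10)
  test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10

The remainder of the trace below is the test's watchdog loop: it repeatedly issues kill -0 against the connect_stress PID (2354975) and batches RPCs while the stress run is in flight, until the process exits.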
00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.448 [2024-12-10 03:59:38.748455] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.448 NULL1 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2354975 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.448 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.449 03:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.449 03:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.449 03:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.015 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.015 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:45.015 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.015 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.015 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.273 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.273 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:45.273 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.273 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.273 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.531 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.531 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:45.531 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.531 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.531 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.789 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.789 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:45.789 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.789 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.789 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.047 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.047 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:46.047 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.047 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.047 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.612 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.612 03:59:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:46.612 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.612 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.612 03:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.870 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.870 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:46.870 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.870 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.870 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.128 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.128 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:47.128 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.128 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.128 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.386 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.386 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:47.386 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.386 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.386 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.644 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.644 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:47.644 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.644 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.644 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.210 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.210 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:48.210 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.210 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.210 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.468 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.468 03:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:48.468 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.468 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.468 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.727 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.727 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:48.727 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.727 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.727 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.985 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.985 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:48.985 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.985 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.985 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.550 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.550 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:49.550 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.550 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.550 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.808 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.808 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:49.808 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.808 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.808 03:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.065 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.065 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:50.065 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.065 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.065 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.323 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.323 03:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:50.323 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.323 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.323 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.581 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.581 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:50.581 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.581 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.581 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.156 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.156 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:51.156 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.156 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.156 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.413 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.413 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:51.413 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.413 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.413 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.671 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.671 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:51.671 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.671 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.671 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.929 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.929 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:51.929 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.929 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.929 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.187 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.187 03:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:52.187 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.187 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.187 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.753 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.753 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:52.753 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.753 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.753 03:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.010 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.010 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:53.010 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.010 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.010 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.268 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.268 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:53.268 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.268 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.268 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.525 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.525 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:53.525 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.525 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.525 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.783 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.783 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:53.783 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.783 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.783 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.349 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.349 03:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:54.349 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.349 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.349 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.606 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.606 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:54.606 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.607 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.607 03:59:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.607 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2354975 00:11:54.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2354975) - No such process 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2354975 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:54.865 rmmod nvme_tcp 00:11:54.865 rmmod nvme_fabrics 00:11:54.865 rmmod nvme_keyring 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2354948 ']' 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2354948 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2354948 ']' 00:11:54.865 03:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2354948 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2354948 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2354948' 00:11:54.865 killing process with pid 2354948 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2354948 00:11:54.865 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2354948 00:11:55.123 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:55.123 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:55.123 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:55.123 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:11:55.123 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:11:55.123 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:55.123 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:11:55.123 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:55.123 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:55.123 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.123 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.123 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.661 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:57.661 00:11:57.661 real 0m15.637s 00:11:57.661 user 0m38.752s 00:11:57.661 sys 0m5.937s 00:11:57.661 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.661 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.661 ************************************ 00:11:57.661 END TEST nvmf_connect_stress 00:11:57.662 ************************************ 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:57.662 
03:59:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:57.662 ************************************ 00:11:57.662 START TEST nvmf_fused_ordering 00:11:57.662 ************************************ 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:57.662 * Looking for test storage... 00:11:57.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:57.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.662 --rc genhtml_branch_coverage=1 00:11:57.662 --rc genhtml_function_coverage=1 00:11:57.662 --rc genhtml_legend=1 00:11:57.662 --rc geninfo_all_blocks=1 00:11:57.662 --rc geninfo_unexecuted_blocks=1 00:11:57.662 00:11:57.662 ' 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:57.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.662 --rc genhtml_branch_coverage=1 00:11:57.662 --rc genhtml_function_coverage=1 00:11:57.662 --rc genhtml_legend=1 00:11:57.662 --rc geninfo_all_blocks=1 00:11:57.662 --rc geninfo_unexecuted_blocks=1 00:11:57.662 00:11:57.662 ' 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:57.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.662 --rc genhtml_branch_coverage=1 00:11:57.662 --rc genhtml_function_coverage=1 00:11:57.662 --rc genhtml_legend=1 00:11:57.662 --rc geninfo_all_blocks=1 00:11:57.662 --rc geninfo_unexecuted_blocks=1 00:11:57.662 00:11:57.662 ' 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:57.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.662 --rc genhtml_branch_coverage=1 00:11:57.662 --rc genhtml_function_coverage=1 00:11:57.662 --rc genhtml_legend=1 00:11:57.662 --rc geninfo_all_blocks=1 00:11:57.662 --rc geninfo_unexecuted_blocks=1 00:11:57.662 00:11:57.662 ' 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:57.662 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:57.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:11:57.663 03:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:59.565 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.565 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:11:59.566 03:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:59.566 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:59.566 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:59.566 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:59.566 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:59.566 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.826 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.826 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.826 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:59.826 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:59.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:11:59.826 00:11:59.826 --- 10.0.0.2 ping statistics --- 00:11:59.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.826 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:11:59.826 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:11:59.826 00:11:59.826 --- 10.0.0.1 ping statistics --- 00:11:59.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.826 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:11:59.826 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.826 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:11:59.826 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:59.826 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.826 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:59.826 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:59.826 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.826 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:59.826 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:59.826 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:59.826 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:59.826 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:59.826 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:59.826 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2358129 00:11:59.826 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2358129 00:11:59.826 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:59.826 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2358129 ']' 00:11:59.826 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.826 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.826 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:59.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.826 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.826 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:59.826 [2024-12-10 03:59:54.081683] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:11:59.826 [2024-12-10 03:59:54.081773] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.826 [2024-12-10 03:59:54.154095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.826 [2024-12-10 03:59:54.208603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.826 [2024-12-10 03:59:54.208658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.826 [2024-12-10 03:59:54.208672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.826 [2024-12-10 03:59:54.208684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.826 [2024-12-10 03:59:54.208695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.085 [2024-12-10 03:59:54.209323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.085 [2024-12-10 03:59:54.383847] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.085 [2024-12-10 03:59:54.400073] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.085 NULL1 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.085 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:00.085 [2024-12-10 03:59:54.443567] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
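Up to this point the harness has moved the E810 port cvl_0_0 (10.0.0.2) into the cvl_0_0_ns_spdk namespace for the target side, left cvl_0_1 (10.0.0.1) in the default namespace for the initiator side, started nvmf_tgt inside the namespace, and provisioned the subsystem over the RPC socket. A minimal stand-alone sketch of that provisioning, assuming the same workspace paths as this job and using a plain sleep in place of the harness's waitforlisten helper:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py"        # talks to the default /var/tmp/spdk.sock

    # start the target inside the namespace that owns cvl_0_0 / 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    sleep 2                           # crude stand-in for waitforlisten

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512   # ~1 GB null bdev with 512-byte blocks ("size: 1GB" below)
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # the initiator-side tool then connects from the default namespace via cvl_0_1
    "$SPDK/test/nvme/fused_ordering/fused_ordering" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Because the RPC socket is a UNIX-domain socket on the shared filesystem, rpc.py does not need to run inside the namespace; only the TCP listener does.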
00:12:00.085 [2024-12-10 03:59:54.443615] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358270 ] 00:12:00.650 Attached to nqn.2016-06.io.spdk:cnode1 00:12:00.650 Namespace ID: 1 size: 1GB 00:12:00.650 fused_ordering(0) 00:12:00.650 fused_ordering(1) 00:12:00.650 fused_ordering(2) 00:12:00.650 fused_ordering(3) 00:12:00.650 fused_ordering(4) 00:12:00.650 fused_ordering(5) 00:12:00.650 fused_ordering(6) 00:12:00.650 fused_ordering(7) 00:12:00.650 fused_ordering(8) 00:12:00.650 fused_ordering(9) 00:12:00.650 fused_ordering(10) 00:12:00.650 fused_ordering(11) 00:12:00.650 fused_ordering(12) 00:12:00.650 fused_ordering(13) 00:12:00.650 fused_ordering(14) 00:12:00.650 fused_ordering(15) 00:12:00.650 fused_ordering(16) 00:12:00.650 fused_ordering(17) 00:12:00.650 fused_ordering(18) 00:12:00.650 fused_ordering(19) 00:12:00.650 fused_ordering(20) 00:12:00.650 fused_ordering(21) 00:12:00.650 fused_ordering(22) 00:12:00.650 fused_ordering(23) 00:12:00.650 fused_ordering(24) 00:12:00.650 fused_ordering(25) 00:12:00.650 fused_ordering(26) 00:12:00.650 fused_ordering(27) 00:12:00.650 fused_ordering(28) 00:12:00.650 fused_ordering(29) 00:12:00.650 fused_ordering(30) 00:12:00.650 fused_ordering(31) 00:12:00.650 fused_ordering(32) 00:12:00.650 fused_ordering(33) 00:12:00.650 fused_ordering(34) 00:12:00.650 fused_ordering(35) 00:12:00.650 fused_ordering(36) 00:12:00.650 fused_ordering(37) 00:12:00.651 fused_ordering(38) 00:12:00.651 fused_ordering(39) 00:12:00.651 fused_ordering(40) 00:12:00.651 fused_ordering(41) 00:12:00.651 fused_ordering(42) 00:12:00.651 fused_ordering(43) 00:12:00.651 fused_ordering(44) 00:12:00.651 fused_ordering(45) 00:12:00.651 fused_ordering(46) 00:12:00.651 fused_ordering(47) 00:12:00.651 fused_ordering(48) 00:12:00.651 fused_ordering(49) 00:12:00.651 fused_ordering(50) 00:12:00.651 fused_ordering(51) 00:12:00.651 fused_ordering(52) 00:12:00.651 fused_ordering(53) 00:12:00.651 fused_ordering(54) 00:12:00.651 fused_ordering(55) 00:12:00.651 fused_ordering(56) 00:12:00.651 fused_ordering(57) 00:12:00.651 fused_ordering(58) 00:12:00.651 fused_ordering(59) 00:12:00.651 fused_ordering(60) 00:12:00.651 fused_ordering(61) 00:12:00.651 fused_ordering(62) 00:12:00.651 fused_ordering(63) 00:12:00.651 fused_ordering(64) 00:12:00.651 fused_ordering(65) 00:12:00.651 fused_ordering(66) 00:12:00.651 fused_ordering(67) 00:12:00.651 fused_ordering(68) 00:12:00.651 fused_ordering(69) 00:12:00.651 fused_ordering(70) 00:12:00.651 fused_ordering(71) 00:12:00.651 fused_ordering(72) 00:12:00.651 fused_ordering(73) 00:12:00.651 fused_ordering(74) 00:12:00.651 fused_ordering(75) 00:12:00.651 fused_ordering(76) 00:12:00.651 fused_ordering(77) 00:12:00.651 fused_ordering(78) 00:12:00.651 fused_ordering(79) 00:12:00.651 fused_ordering(80) 00:12:00.651 fused_ordering(81) 00:12:00.651 fused_ordering(82) 00:12:00.651 fused_ordering(83) 00:12:00.651 fused_ordering(84) 00:12:00.651 fused_ordering(85) 00:12:00.651 fused_ordering(86) 00:12:00.651 fused_ordering(87) 00:12:00.651 fused_ordering(88) 00:12:00.651 fused_ordering(89) 00:12:00.651 fused_ordering(90) 00:12:00.651 fused_ordering(91) 00:12:00.651 fused_ordering(92) 00:12:00.651 fused_ordering(93) 00:12:00.651 fused_ordering(94) 00:12:00.651 fused_ordering(95) 00:12:00.651 fused_ordering(96) 00:12:00.651 fused_ordering(97) 00:12:00.651 fused_ordering(98) 
00:12:00.651 fused_ordering(99) 00:12:00.651 fused_ordering(100) 00:12:00.651 fused_ordering(101) 00:12:00.651 fused_ordering(102) 00:12:00.651 fused_ordering(103) 00:12:00.651 fused_ordering(104) 00:12:00.651 fused_ordering(105) 00:12:00.651 fused_ordering(106) 00:12:00.651 fused_ordering(107) 00:12:00.651 fused_ordering(108) 00:12:00.651 fused_ordering(109) 00:12:00.651 fused_ordering(110) 00:12:00.651 fused_ordering(111) 00:12:00.651 fused_ordering(112) 00:12:00.651 fused_ordering(113) 00:12:00.651 fused_ordering(114) 00:12:00.651 fused_ordering(115) 00:12:00.651 fused_ordering(116) 00:12:00.651 fused_ordering(117) 00:12:00.651 fused_ordering(118) 00:12:00.651 fused_ordering(119) 00:12:00.651 fused_ordering(120) 00:12:00.651 fused_ordering(121) 00:12:00.651 fused_ordering(122) 00:12:00.651 fused_ordering(123) 00:12:00.651 fused_ordering(124) 00:12:00.651 fused_ordering(125) 00:12:00.651 fused_ordering(126) 00:12:00.651 fused_ordering(127) 00:12:00.651 fused_ordering(128) 00:12:00.651 fused_ordering(129) 00:12:00.651 fused_ordering(130) 00:12:00.651 fused_ordering(131) 00:12:00.651 fused_ordering(132) 00:12:00.651 fused_ordering(133) 00:12:00.651 fused_ordering(134) 00:12:00.651 fused_ordering(135) 00:12:00.651 fused_ordering(136) 00:12:00.651 fused_ordering(137) 00:12:00.651 fused_ordering(138) 00:12:00.651 fused_ordering(139) 00:12:00.651 fused_ordering(140) 00:12:00.651 fused_ordering(141) 00:12:00.651 fused_ordering(142) 00:12:00.651 fused_ordering(143) 00:12:00.651 fused_ordering(144) 00:12:00.651 fused_ordering(145) 00:12:00.651 fused_ordering(146) 00:12:00.651 fused_ordering(147) 00:12:00.651 fused_ordering(148) 00:12:00.651 fused_ordering(149) 00:12:00.651 fused_ordering(150) 00:12:00.651 fused_ordering(151) 00:12:00.651 fused_ordering(152) 00:12:00.651 fused_ordering(153) 00:12:00.651 fused_ordering(154) 00:12:00.651 fused_ordering(155) 00:12:00.651 fused_ordering(156) 00:12:00.651 fused_ordering(157) 00:12:00.651 fused_ordering(158) 00:12:00.651 fused_ordering(159) 00:12:00.651 fused_ordering(160) 00:12:00.651 fused_ordering(161) 00:12:00.651 fused_ordering(162) 00:12:00.651 fused_ordering(163) 00:12:00.651 fused_ordering(164) 00:12:00.651 fused_ordering(165) 00:12:00.651 fused_ordering(166) 00:12:00.651 fused_ordering(167) 00:12:00.651 fused_ordering(168) 00:12:00.651 fused_ordering(169) 00:12:00.651 fused_ordering(170) 00:12:00.651 fused_ordering(171) 00:12:00.651 fused_ordering(172) 00:12:00.651 fused_ordering(173) 00:12:00.651 fused_ordering(174) 00:12:00.651 fused_ordering(175) 00:12:00.651 fused_ordering(176) 00:12:00.651 fused_ordering(177) 00:12:00.651 fused_ordering(178) 00:12:00.651 fused_ordering(179) 00:12:00.651 fused_ordering(180) 00:12:00.651 fused_ordering(181) 00:12:00.651 fused_ordering(182) 00:12:00.651 fused_ordering(183) 00:12:00.651 fused_ordering(184) 00:12:00.651 fused_ordering(185) 00:12:00.651 fused_ordering(186) 00:12:00.651 fused_ordering(187) 00:12:00.651 fused_ordering(188) 00:12:00.651 fused_ordering(189) 00:12:00.651 fused_ordering(190) 00:12:00.651 fused_ordering(191) 00:12:00.651 fused_ordering(192) 00:12:00.651 fused_ordering(193) 00:12:00.651 fused_ordering(194) 00:12:00.651 fused_ordering(195) 00:12:00.651 fused_ordering(196) 00:12:00.651 fused_ordering(197) 00:12:00.651 fused_ordering(198) 00:12:00.651 fused_ordering(199) 00:12:00.651 fused_ordering(200) 00:12:00.651 fused_ordering(201) 00:12:00.651 fused_ordering(202) 00:12:00.651 fused_ordering(203) 00:12:00.651 fused_ordering(204) 00:12:00.651 fused_ordering(205) 00:12:00.908 
fused_ordering(206) 00:12:00.908 fused_ordering(207) 00:12:00.908 fused_ordering(208) 00:12:00.908 fused_ordering(209) 00:12:00.908 fused_ordering(210) 00:12:00.908 fused_ordering(211) 00:12:00.908 fused_ordering(212) 00:12:00.908 fused_ordering(213) 00:12:00.908 fused_ordering(214) 00:12:00.908 fused_ordering(215) 00:12:00.908 fused_ordering(216) 00:12:00.908 fused_ordering(217) 00:12:00.908 fused_ordering(218) 00:12:00.908 fused_ordering(219) 00:12:00.908 fused_ordering(220) 00:12:00.908 fused_ordering(221) 00:12:00.908 fused_ordering(222) 00:12:00.908 fused_ordering(223) 00:12:00.908 fused_ordering(224) 00:12:00.908 fused_ordering(225) 00:12:00.908 fused_ordering(226) 00:12:00.908 fused_ordering(227) 00:12:00.908 fused_ordering(228) 00:12:00.908 fused_ordering(229) 00:12:00.908 fused_ordering(230) 00:12:00.908 fused_ordering(231) 00:12:00.908 fused_ordering(232) 00:12:00.908 fused_ordering(233) 00:12:00.908 fused_ordering(234) 00:12:00.908 fused_ordering(235) 00:12:00.908 fused_ordering(236) 00:12:00.908 fused_ordering(237) 00:12:00.908 fused_ordering(238) 00:12:00.908 fused_ordering(239) 00:12:00.908 fused_ordering(240) 00:12:00.908 fused_ordering(241) 00:12:00.908 fused_ordering(242) 00:12:00.908 fused_ordering(243) 00:12:00.908 fused_ordering(244) 00:12:00.908 fused_ordering(245) 00:12:00.908 fused_ordering(246) 00:12:00.908 fused_ordering(247) 00:12:00.908 fused_ordering(248) 00:12:00.908 fused_ordering(249) 00:12:00.908 fused_ordering(250) 00:12:00.908 fused_ordering(251) 00:12:00.908 fused_ordering(252) 00:12:00.908 fused_ordering(253) 00:12:00.908 fused_ordering(254) 00:12:00.908 fused_ordering(255) 00:12:00.908 fused_ordering(256) 00:12:00.908 fused_ordering(257) 00:12:00.908 fused_ordering(258) 00:12:00.908 fused_ordering(259) 00:12:00.908 fused_ordering(260) 00:12:00.908 fused_ordering(261) 00:12:00.908 fused_ordering(262) 00:12:00.908 fused_ordering(263) 00:12:00.908 fused_ordering(264) 00:12:00.908 fused_ordering(265) 00:12:00.908 fused_ordering(266) 00:12:00.908 fused_ordering(267) 00:12:00.909 fused_ordering(268) 00:12:00.909 fused_ordering(269) 00:12:00.909 fused_ordering(270) 00:12:00.909 fused_ordering(271) 00:12:00.909 fused_ordering(272) 00:12:00.909 fused_ordering(273) 00:12:00.909 fused_ordering(274) 00:12:00.909 fused_ordering(275) 00:12:00.909 fused_ordering(276) 00:12:00.909 fused_ordering(277) 00:12:00.909 fused_ordering(278) 00:12:00.909 fused_ordering(279) 00:12:00.909 fused_ordering(280) 00:12:00.909 fused_ordering(281) 00:12:00.909 fused_ordering(282) 00:12:00.909 fused_ordering(283) 00:12:00.909 fused_ordering(284) 00:12:00.909 fused_ordering(285) 00:12:00.909 fused_ordering(286) 00:12:00.909 fused_ordering(287) 00:12:00.909 fused_ordering(288) 00:12:00.909 fused_ordering(289) 00:12:00.909 fused_ordering(290) 00:12:00.909 fused_ordering(291) 00:12:00.909 fused_ordering(292) 00:12:00.909 fused_ordering(293) 00:12:00.909 fused_ordering(294) 00:12:00.909 fused_ordering(295) 00:12:00.909 fused_ordering(296) 00:12:00.909 fused_ordering(297) 00:12:00.909 fused_ordering(298) 00:12:00.909 fused_ordering(299) 00:12:00.909 fused_ordering(300) 00:12:00.909 fused_ordering(301) 00:12:00.909 fused_ordering(302) 00:12:00.909 fused_ordering(303) 00:12:00.909 fused_ordering(304) 00:12:00.909 fused_ordering(305) 00:12:00.909 fused_ordering(306) 00:12:00.909 fused_ordering(307) 00:12:00.909 fused_ordering(308) 00:12:00.909 fused_ordering(309) 00:12:00.909 fused_ordering(310) 00:12:00.909 fused_ordering(311) 00:12:00.909 fused_ordering(312) 00:12:00.909 fused_ordering(313) 
00:12:00.909 fused_ordering(314) 00:12:00.909 fused_ordering(315) 00:12:00.909 fused_ordering(316) 00:12:00.909 fused_ordering(317) 00:12:00.909 fused_ordering(318) 00:12:00.909 fused_ordering(319) 00:12:00.909 fused_ordering(320) 00:12:00.909 fused_ordering(321) 00:12:00.909 fused_ordering(322) 00:12:00.909 fused_ordering(323) 00:12:00.909 fused_ordering(324) 00:12:00.909 fused_ordering(325) 00:12:00.909 fused_ordering(326) 00:12:00.909 fused_ordering(327) 00:12:00.909 fused_ordering(328) 00:12:00.909 fused_ordering(329) 00:12:00.909 fused_ordering(330) 00:12:00.909 fused_ordering(331) 00:12:00.909 fused_ordering(332) 00:12:00.909 fused_ordering(333) 00:12:00.909 fused_ordering(334) 00:12:00.909 fused_ordering(335) 00:12:00.909 fused_ordering(336) 00:12:00.909 fused_ordering(337) 00:12:00.909 fused_ordering(338) 00:12:00.909 fused_ordering(339) 00:12:00.909 fused_ordering(340) 00:12:00.909 fused_ordering(341) 00:12:00.909 fused_ordering(342) 00:12:00.909 fused_ordering(343) 00:12:00.909 fused_ordering(344) 00:12:00.909 fused_ordering(345) 00:12:00.909 fused_ordering(346) 00:12:00.909 fused_ordering(347) 00:12:00.909 fused_ordering(348) 00:12:00.909 fused_ordering(349) 00:12:00.909 fused_ordering(350) 00:12:00.909 fused_ordering(351) 00:12:00.909 fused_ordering(352) 00:12:00.909 fused_ordering(353) 00:12:00.909 fused_ordering(354) 00:12:00.909 fused_ordering(355) 00:12:00.909 fused_ordering(356) 00:12:00.909 fused_ordering(357) 00:12:00.909 fused_ordering(358) 00:12:00.909 fused_ordering(359) 00:12:00.909 fused_ordering(360) 00:12:00.909 fused_ordering(361) 00:12:00.909 fused_ordering(362) 00:12:00.909 fused_ordering(363) 00:12:00.909 fused_ordering(364) 00:12:00.909 fused_ordering(365) 00:12:00.909 fused_ordering(366) 00:12:00.909 fused_ordering(367) 00:12:00.909 fused_ordering(368) 00:12:00.909 fused_ordering(369) 00:12:00.909 fused_ordering(370) 00:12:00.909 fused_ordering(371) 00:12:00.909 fused_ordering(372) 00:12:00.909 fused_ordering(373) 00:12:00.909 fused_ordering(374) 00:12:00.909 fused_ordering(375) 00:12:00.909 fused_ordering(376) 00:12:00.909 fused_ordering(377) 00:12:00.909 fused_ordering(378) 00:12:00.909 fused_ordering(379) 00:12:00.909 fused_ordering(380) 00:12:00.909 fused_ordering(381) 00:12:00.909 fused_ordering(382) 00:12:00.909 fused_ordering(383) 00:12:00.909 fused_ordering(384) 00:12:00.909 fused_ordering(385) 00:12:00.909 fused_ordering(386) 00:12:00.909 fused_ordering(387) 00:12:00.909 fused_ordering(388) 00:12:00.909 fused_ordering(389) 00:12:00.909 fused_ordering(390) 00:12:00.909 fused_ordering(391) 00:12:00.909 fused_ordering(392) 00:12:00.909 fused_ordering(393) 00:12:00.909 fused_ordering(394) 00:12:00.909 fused_ordering(395) 00:12:00.909 fused_ordering(396) 00:12:00.909 fused_ordering(397) 00:12:00.909 fused_ordering(398) 00:12:00.909 fused_ordering(399) 00:12:00.909 fused_ordering(400) 00:12:00.909 fused_ordering(401) 00:12:00.909 fused_ordering(402) 00:12:00.909 fused_ordering(403) 00:12:00.909 fused_ordering(404) 00:12:00.909 fused_ordering(405) 00:12:00.909 fused_ordering(406) 00:12:00.909 fused_ordering(407) 00:12:00.909 fused_ordering(408) 00:12:00.909 fused_ordering(409) 00:12:00.909 fused_ordering(410) 00:12:01.473 fused_ordering(411) 00:12:01.473 fused_ordering(412) 00:12:01.473 fused_ordering(413) 00:12:01.473 fused_ordering(414) 00:12:01.473 fused_ordering(415) 00:12:01.474 fused_ordering(416) 00:12:01.474 fused_ordering(417) 00:12:01.474 fused_ordering(418) 00:12:01.474 fused_ordering(419) 00:12:01.474 fused_ordering(420) 00:12:01.474 
fused_ordering(421) 00:12:01.474 fused_ordering(422) 00:12:01.474 fused_ordering(423) 00:12:01.474 fused_ordering(424) 00:12:01.474 fused_ordering(425) 00:12:01.474 fused_ordering(426) 00:12:01.474 fused_ordering(427) 00:12:01.474 fused_ordering(428) 00:12:01.474 fused_ordering(429) 00:12:01.474 fused_ordering(430) 00:12:01.474 fused_ordering(431) 00:12:01.474 fused_ordering(432) 00:12:01.474 fused_ordering(433) 00:12:01.474 fused_ordering(434) 00:12:01.474 fused_ordering(435) 00:12:01.474 fused_ordering(436) 00:12:01.474 fused_ordering(437) 00:12:01.474 fused_ordering(438) 00:12:01.474 fused_ordering(439) 00:12:01.474 fused_ordering(440) 00:12:01.474 fused_ordering(441) 00:12:01.474 fused_ordering(442) 00:12:01.474 fused_ordering(443) 00:12:01.474 fused_ordering(444) 00:12:01.474 fused_ordering(445) 00:12:01.474 fused_ordering(446) 00:12:01.474 fused_ordering(447) 00:12:01.474 fused_ordering(448) 00:12:01.474 fused_ordering(449) 00:12:01.474 fused_ordering(450) 00:12:01.474 fused_ordering(451) 00:12:01.474 fused_ordering(452) 00:12:01.474 fused_ordering(453) 00:12:01.474 fused_ordering(454) 00:12:01.474 fused_ordering(455) 00:12:01.474 fused_ordering(456) 00:12:01.474 fused_ordering(457) 00:12:01.474 fused_ordering(458) 00:12:01.474 fused_ordering(459) 00:12:01.474 fused_ordering(460) 00:12:01.474 fused_ordering(461) 00:12:01.474 fused_ordering(462) 00:12:01.474 fused_ordering(463) 00:12:01.474 fused_ordering(464) 00:12:01.474 fused_ordering(465) 00:12:01.474 fused_ordering(466) 00:12:01.474 fused_ordering(467) 00:12:01.474 fused_ordering(468) 00:12:01.474 fused_ordering(469) 00:12:01.474 fused_ordering(470) 00:12:01.474 fused_ordering(471) 00:12:01.474 fused_ordering(472) 00:12:01.474 fused_ordering(473) 00:12:01.474 fused_ordering(474) 00:12:01.474 fused_ordering(475) 00:12:01.474 fused_ordering(476) 00:12:01.474 fused_ordering(477) 00:12:01.474 fused_ordering(478) 00:12:01.474 fused_ordering(479) 00:12:01.474 fused_ordering(480) 00:12:01.474 fused_ordering(481) 00:12:01.474 fused_ordering(482) 00:12:01.474 fused_ordering(483) 00:12:01.474 fused_ordering(484) 00:12:01.474 fused_ordering(485) 00:12:01.474 fused_ordering(486) 00:12:01.474 fused_ordering(487) 00:12:01.474 fused_ordering(488) 00:12:01.474 fused_ordering(489) 00:12:01.474 fused_ordering(490) 00:12:01.474 fused_ordering(491) 00:12:01.474 fused_ordering(492) 00:12:01.474 fused_ordering(493) 00:12:01.474 fused_ordering(494) 00:12:01.474 fused_ordering(495) 00:12:01.474 fused_ordering(496) 00:12:01.474 fused_ordering(497) 00:12:01.474 fused_ordering(498) 00:12:01.474 fused_ordering(499) 00:12:01.474 fused_ordering(500) 00:12:01.474 fused_ordering(501) 00:12:01.474 fused_ordering(502) 00:12:01.474 fused_ordering(503) 00:12:01.474 fused_ordering(504) 00:12:01.474 fused_ordering(505) 00:12:01.474 fused_ordering(506) 00:12:01.474 fused_ordering(507) 00:12:01.474 fused_ordering(508) 00:12:01.474 fused_ordering(509) 00:12:01.474 fused_ordering(510) 00:12:01.474 fused_ordering(511) 00:12:01.474 fused_ordering(512) 00:12:01.474 fused_ordering(513) 00:12:01.474 fused_ordering(514) 00:12:01.474 fused_ordering(515) 00:12:01.474 fused_ordering(516) 00:12:01.474 fused_ordering(517) 00:12:01.474 fused_ordering(518) 00:12:01.474 fused_ordering(519) 00:12:01.474 fused_ordering(520) 00:12:01.474 fused_ordering(521) 00:12:01.474 fused_ordering(522) 00:12:01.474 fused_ordering(523) 00:12:01.474 fused_ordering(524) 00:12:01.474 fused_ordering(525) 00:12:01.474 fused_ordering(526) 00:12:01.474 fused_ordering(527) 00:12:01.474 fused_ordering(528) 
00:12:01.474 fused_ordering(529) 00:12:01.474 fused_ordering(530) 00:12:01.474 fused_ordering(531) 00:12:01.474 fused_ordering(532) 00:12:01.474 fused_ordering(533) 00:12:01.474 fused_ordering(534) 00:12:01.474 fused_ordering(535) 00:12:01.474 fused_ordering(536) 00:12:01.474 fused_ordering(537) 00:12:01.474 fused_ordering(538) 00:12:01.474 fused_ordering(539) 00:12:01.474 fused_ordering(540) 00:12:01.474 fused_ordering(541) 00:12:01.474 fused_ordering(542) 00:12:01.474 fused_ordering(543) 00:12:01.474 fused_ordering(544) 00:12:01.474 fused_ordering(545) 00:12:01.474 fused_ordering(546) 00:12:01.474 fused_ordering(547) 00:12:01.474 fused_ordering(548) 00:12:01.474 fused_ordering(549) 00:12:01.474 fused_ordering(550) 00:12:01.474 fused_ordering(551) 00:12:01.474 fused_ordering(552) 00:12:01.474 fused_ordering(553) 00:12:01.474 fused_ordering(554) 00:12:01.474 fused_ordering(555) 00:12:01.474 fused_ordering(556) 00:12:01.474 fused_ordering(557) 00:12:01.474 fused_ordering(558) 00:12:01.474 fused_ordering(559) 00:12:01.474 fused_ordering(560) 00:12:01.474 fused_ordering(561) 00:12:01.474 fused_ordering(562) 00:12:01.474 fused_ordering(563) 00:12:01.474 fused_ordering(564) 00:12:01.474 fused_ordering(565) 00:12:01.474 fused_ordering(566) 00:12:01.474 fused_ordering(567) 00:12:01.474 fused_ordering(568) 00:12:01.474 fused_ordering(569) 00:12:01.474 fused_ordering(570) 00:12:01.474 fused_ordering(571) 00:12:01.474 fused_ordering(572) 00:12:01.474 fused_ordering(573) 00:12:01.474 fused_ordering(574) 00:12:01.474 fused_ordering(575) 00:12:01.474 fused_ordering(576) 00:12:01.474 fused_ordering(577) 00:12:01.474 fused_ordering(578) 00:12:01.474 fused_ordering(579) 00:12:01.474 fused_ordering(580) 00:12:01.474 fused_ordering(581) 00:12:01.474 fused_ordering(582) 00:12:01.474 fused_ordering(583) 00:12:01.474 fused_ordering(584) 00:12:01.474 fused_ordering(585) 00:12:01.474 fused_ordering(586) 00:12:01.474 fused_ordering(587) 00:12:01.474 fused_ordering(588) 00:12:01.474 fused_ordering(589) 00:12:01.474 fused_ordering(590) 00:12:01.474 fused_ordering(591) 00:12:01.474 fused_ordering(592) 00:12:01.474 fused_ordering(593) 00:12:01.474 fused_ordering(594) 00:12:01.474 fused_ordering(595) 00:12:01.474 fused_ordering(596) 00:12:01.474 fused_ordering(597) 00:12:01.474 fused_ordering(598) 00:12:01.474 fused_ordering(599) 00:12:01.474 fused_ordering(600) 00:12:01.474 fused_ordering(601) 00:12:01.474 fused_ordering(602) 00:12:01.474 fused_ordering(603) 00:12:01.474 fused_ordering(604) 00:12:01.474 fused_ordering(605) 00:12:01.474 fused_ordering(606) 00:12:01.474 fused_ordering(607) 00:12:01.474 fused_ordering(608) 00:12:01.474 fused_ordering(609) 00:12:01.474 fused_ordering(610) 00:12:01.474 fused_ordering(611) 00:12:01.474 fused_ordering(612) 00:12:01.474 fused_ordering(613) 00:12:01.474 fused_ordering(614) 00:12:01.474 fused_ordering(615) 00:12:02.039 fused_ordering(616) 00:12:02.039 fused_ordering(617) 00:12:02.039 fused_ordering(618) 00:12:02.039 fused_ordering(619) 00:12:02.039 fused_ordering(620) 00:12:02.039 fused_ordering(621) 00:12:02.039 fused_ordering(622) 00:12:02.039 fused_ordering(623) 00:12:02.039 fused_ordering(624) 00:12:02.039 fused_ordering(625) 00:12:02.039 fused_ordering(626) 00:12:02.039 fused_ordering(627) 00:12:02.039 fused_ordering(628) 00:12:02.039 fused_ordering(629) 00:12:02.039 fused_ordering(630) 00:12:02.039 fused_ordering(631) 00:12:02.039 fused_ordering(632) 00:12:02.039 fused_ordering(633) 00:12:02.039 fused_ordering(634) 00:12:02.039 fused_ordering(635) 00:12:02.039 
fused_ordering(636) 00:12:02.039 fused_ordering(637) 00:12:02.039 fused_ordering(638) 00:12:02.039 fused_ordering(639) 00:12:02.039 fused_ordering(640) 00:12:02.039 fused_ordering(641) 00:12:02.039 fused_ordering(642) 00:12:02.039 fused_ordering(643) 00:12:02.039 fused_ordering(644) 00:12:02.039 fused_ordering(645) 00:12:02.039 fused_ordering(646) 00:12:02.039 fused_ordering(647) 00:12:02.039 fused_ordering(648) 00:12:02.039 fused_ordering(649) 00:12:02.039 fused_ordering(650) 00:12:02.039 fused_ordering(651) 00:12:02.039 fused_ordering(652) 00:12:02.039 fused_ordering(653) 00:12:02.039 fused_ordering(654) 00:12:02.039 fused_ordering(655) 00:12:02.039 fused_ordering(656) 00:12:02.039 fused_ordering(657) 00:12:02.039 fused_ordering(658) 00:12:02.039 fused_ordering(659) 00:12:02.039 fused_ordering(660) 00:12:02.039 fused_ordering(661) 00:12:02.039 fused_ordering(662) 00:12:02.039 fused_ordering(663) 00:12:02.039 fused_ordering(664) 00:12:02.039 fused_ordering(665) 00:12:02.039 fused_ordering(666) 00:12:02.039 fused_ordering(667) 00:12:02.039 fused_ordering(668) 00:12:02.039 fused_ordering(669) 00:12:02.039 fused_ordering(670) 00:12:02.039 fused_ordering(671) 00:12:02.039 fused_ordering(672) 00:12:02.039 fused_ordering(673) 00:12:02.039 fused_ordering(674) 00:12:02.039 fused_ordering(675) 00:12:02.039 fused_ordering(676) 00:12:02.040 fused_ordering(677) 00:12:02.040 fused_ordering(678) 00:12:02.040 fused_ordering(679) 00:12:02.040 fused_ordering(680) 00:12:02.040 fused_ordering(681) 00:12:02.040 fused_ordering(682) 00:12:02.040 fused_ordering(683) 00:12:02.040 fused_ordering(684) 00:12:02.040 fused_ordering(685) 00:12:02.040 fused_ordering(686) 00:12:02.040 fused_ordering(687) 00:12:02.040 fused_ordering(688) 00:12:02.040 fused_ordering(689) 00:12:02.040 fused_ordering(690) 00:12:02.040 fused_ordering(691) 00:12:02.040 fused_ordering(692) 00:12:02.040 fused_ordering(693) 00:12:02.040 fused_ordering(694) 00:12:02.040 fused_ordering(695) 00:12:02.040 fused_ordering(696) 00:12:02.040 fused_ordering(697) 00:12:02.040 fused_ordering(698) 00:12:02.040 fused_ordering(699) 00:12:02.040 fused_ordering(700) 00:12:02.040 fused_ordering(701) 00:12:02.040 fused_ordering(702) 00:12:02.040 fused_ordering(703) 00:12:02.040 fused_ordering(704) 00:12:02.040 fused_ordering(705) 00:12:02.040 fused_ordering(706) 00:12:02.040 fused_ordering(707) 00:12:02.040 fused_ordering(708) 00:12:02.040 fused_ordering(709) 00:12:02.040 fused_ordering(710) 00:12:02.040 fused_ordering(711) 00:12:02.040 fused_ordering(712) 00:12:02.040 fused_ordering(713) 00:12:02.040 fused_ordering(714) 00:12:02.040 fused_ordering(715) 00:12:02.040 fused_ordering(716) 00:12:02.040 fused_ordering(717) 00:12:02.040 fused_ordering(718) 00:12:02.040 fused_ordering(719) 00:12:02.040 fused_ordering(720) 00:12:02.040 fused_ordering(721) 00:12:02.040 fused_ordering(722) 00:12:02.040 fused_ordering(723) 00:12:02.040 fused_ordering(724) 00:12:02.040 fused_ordering(725) 00:12:02.040 fused_ordering(726) 00:12:02.040 fused_ordering(727) 00:12:02.040 fused_ordering(728) 00:12:02.040 fused_ordering(729) 00:12:02.040 fused_ordering(730) 00:12:02.040 fused_ordering(731) 00:12:02.040 fused_ordering(732) 00:12:02.040 fused_ordering(733) 00:12:02.040 fused_ordering(734) 00:12:02.040 fused_ordering(735) 00:12:02.040 fused_ordering(736) 00:12:02.040 fused_ordering(737) 00:12:02.040 fused_ordering(738) 00:12:02.040 fused_ordering(739) 00:12:02.040 fused_ordering(740) 00:12:02.040 fused_ordering(741) 00:12:02.040 fused_ordering(742) 00:12:02.040 fused_ordering(743) 
00:12:02.040 fused_ordering(744) 00:12:02.040 fused_ordering(745) 00:12:02.040 fused_ordering(746) 00:12:02.040 fused_ordering(747) 00:12:02.040 fused_ordering(748) 00:12:02.040 fused_ordering(749) 00:12:02.040 fused_ordering(750) 00:12:02.040 fused_ordering(751) 00:12:02.040 fused_ordering(752) 00:12:02.040 fused_ordering(753) 00:12:02.040 fused_ordering(754) 00:12:02.040 fused_ordering(755) 00:12:02.040 fused_ordering(756) 00:12:02.040 fused_ordering(757) 00:12:02.040 fused_ordering(758) 00:12:02.040 fused_ordering(759) 00:12:02.040 fused_ordering(760) 00:12:02.040 fused_ordering(761) 00:12:02.040 fused_ordering(762) 00:12:02.040 fused_ordering(763) 00:12:02.040 fused_ordering(764) 00:12:02.040 fused_ordering(765) 00:12:02.040 fused_ordering(766) 00:12:02.040 fused_ordering(767) 00:12:02.040 fused_ordering(768) 00:12:02.040 fused_ordering(769) 00:12:02.040 fused_ordering(770) 00:12:02.040 fused_ordering(771) 00:12:02.040 fused_ordering(772) 00:12:02.040 fused_ordering(773) 00:12:02.040 fused_ordering(774) 00:12:02.040 fused_ordering(775) 00:12:02.040 fused_ordering(776) 00:12:02.040 fused_ordering(777) 00:12:02.040 fused_ordering(778) 00:12:02.040 fused_ordering(779) 00:12:02.040 fused_ordering(780) 00:12:02.040 fused_ordering(781) 00:12:02.040 fused_ordering(782) 00:12:02.040 fused_ordering(783) 00:12:02.040 fused_ordering(784) 00:12:02.040 fused_ordering(785) 00:12:02.040 fused_ordering(786) 00:12:02.040 fused_ordering(787) 00:12:02.040 fused_ordering(788) 00:12:02.040 fused_ordering(789) 00:12:02.040 fused_ordering(790) 00:12:02.040 fused_ordering(791) 00:12:02.040 fused_ordering(792) 00:12:02.040 fused_ordering(793) 00:12:02.040 fused_ordering(794) 00:12:02.040 fused_ordering(795) 00:12:02.040 fused_ordering(796) 00:12:02.040 fused_ordering(797) 00:12:02.040 fused_ordering(798) 00:12:02.040 fused_ordering(799) 00:12:02.040 fused_ordering(800) 00:12:02.040 fused_ordering(801) 00:12:02.040 fused_ordering(802) 00:12:02.040 fused_ordering(803) 00:12:02.040 fused_ordering(804) 00:12:02.040 fused_ordering(805) 00:12:02.040 fused_ordering(806) 00:12:02.040 fused_ordering(807) 00:12:02.040 fused_ordering(808) 00:12:02.040 fused_ordering(809) 00:12:02.040 fused_ordering(810) 00:12:02.040 fused_ordering(811) 00:12:02.040 fused_ordering(812) 00:12:02.040 fused_ordering(813) 00:12:02.040 fused_ordering(814) 00:12:02.040 fused_ordering(815) 00:12:02.040 fused_ordering(816) 00:12:02.040 fused_ordering(817) 00:12:02.040 fused_ordering(818) 00:12:02.040 fused_ordering(819) 00:12:02.040 fused_ordering(820) 00:12:02.604 fused_ordering(821) 00:12:02.604 fused_ordering(822) 00:12:02.604 fused_ordering(823) 00:12:02.604 fused_ordering(824) 00:12:02.604 fused_ordering(825) 00:12:02.604 fused_ordering(826) 00:12:02.604 fused_ordering(827) 00:12:02.604 fused_ordering(828) 00:12:02.604 fused_ordering(829) 00:12:02.604 fused_ordering(830) 00:12:02.604 fused_ordering(831) 00:12:02.604 fused_ordering(832) 00:12:02.604 fused_ordering(833) 00:12:02.604 fused_ordering(834) 00:12:02.604 fused_ordering(835) 00:12:02.604 fused_ordering(836) 00:12:02.604 fused_ordering(837) 00:12:02.604 fused_ordering(838) 00:12:02.604 fused_ordering(839) 00:12:02.604 fused_ordering(840) 00:12:02.604 fused_ordering(841) 00:12:02.604 fused_ordering(842) 00:12:02.604 fused_ordering(843) 00:12:02.604 fused_ordering(844) 00:12:02.604 fused_ordering(845) 00:12:02.604 fused_ordering(846) 00:12:02.604 fused_ordering(847) 00:12:02.604 fused_ordering(848) 00:12:02.604 fused_ordering(849) 00:12:02.604 fused_ordering(850) 00:12:02.604 
fused_ordering(851) 00:12:02.604 fused_ordering(852) 00:12:02.604 fused_ordering(853) 00:12:02.604 fused_ordering(854) 00:12:02.604 fused_ordering(855) 00:12:02.604 fused_ordering(856) 00:12:02.604 fused_ordering(857) 00:12:02.604 fused_ordering(858) 00:12:02.604 fused_ordering(859) 00:12:02.605 fused_ordering(860) 00:12:02.605 fused_ordering(861) 00:12:02.605 fused_ordering(862) 00:12:02.605 fused_ordering(863) 00:12:02.605 fused_ordering(864) 00:12:02.605 fused_ordering(865) 00:12:02.605 fused_ordering(866) 00:12:02.605 fused_ordering(867) 00:12:02.605 fused_ordering(868) 00:12:02.605 fused_ordering(869) 00:12:02.605 fused_ordering(870) 00:12:02.605 fused_ordering(871) 00:12:02.605 fused_ordering(872) 00:12:02.605 fused_ordering(873) 00:12:02.605 fused_ordering(874) 00:12:02.605 fused_ordering(875) 00:12:02.605 fused_ordering(876) 00:12:02.605 fused_ordering(877) 00:12:02.605 fused_ordering(878) 00:12:02.605 fused_ordering(879) 00:12:02.605 fused_ordering(880) 00:12:02.605 fused_ordering(881) 00:12:02.605 fused_ordering(882) 00:12:02.605 fused_ordering(883) 00:12:02.605 fused_ordering(884) 00:12:02.605 fused_ordering(885) 00:12:02.605 fused_ordering(886) 00:12:02.605 fused_ordering(887) 00:12:02.605 fused_ordering(888) 00:12:02.605 fused_ordering(889) 00:12:02.605 fused_ordering(890) 00:12:02.605 fused_ordering(891) 00:12:02.605 fused_ordering(892) 00:12:02.605 fused_ordering(893) 00:12:02.605 fused_ordering(894) 00:12:02.605 fused_ordering(895) 00:12:02.605 fused_ordering(896) 00:12:02.605 fused_ordering(897) 00:12:02.605 fused_ordering(898) 00:12:02.605 fused_ordering(899) 00:12:02.605 fused_ordering(900) 00:12:02.605 fused_ordering(901) 00:12:02.605 fused_ordering(902) 00:12:02.605 fused_ordering(903) 00:12:02.605 fused_ordering(904) 00:12:02.605 fused_ordering(905) 00:12:02.605 fused_ordering(906) 00:12:02.605 fused_ordering(907) 00:12:02.605 fused_ordering(908) 00:12:02.605 fused_ordering(909) 00:12:02.605 fused_ordering(910) 00:12:02.605 fused_ordering(911) 00:12:02.605 fused_ordering(912) 00:12:02.605 fused_ordering(913) 00:12:02.605 fused_ordering(914) 00:12:02.605 fused_ordering(915) 00:12:02.605 fused_ordering(916) 00:12:02.605 fused_ordering(917) 00:12:02.605 fused_ordering(918) 00:12:02.605 fused_ordering(919) 00:12:02.605 fused_ordering(920) 00:12:02.605 fused_ordering(921) 00:12:02.605 fused_ordering(922) 00:12:02.605 fused_ordering(923) 00:12:02.605 fused_ordering(924) 00:12:02.605 fused_ordering(925) 00:12:02.605 fused_ordering(926) 00:12:02.605 fused_ordering(927) 00:12:02.605 fused_ordering(928) 00:12:02.605 fused_ordering(929) 00:12:02.605 fused_ordering(930) 00:12:02.605 fused_ordering(931) 00:12:02.605 fused_ordering(932) 00:12:02.605 fused_ordering(933) 00:12:02.605 fused_ordering(934) 00:12:02.605 fused_ordering(935) 00:12:02.605 fused_ordering(936) 00:12:02.605 fused_ordering(937) 00:12:02.605 fused_ordering(938) 00:12:02.605 fused_ordering(939) 00:12:02.605 fused_ordering(940) 00:12:02.605 fused_ordering(941) 00:12:02.605 fused_ordering(942) 00:12:02.605 fused_ordering(943) 00:12:02.605 fused_ordering(944) 00:12:02.605 fused_ordering(945) 00:12:02.605 fused_ordering(946) 00:12:02.605 fused_ordering(947) 00:12:02.605 fused_ordering(948) 00:12:02.605 fused_ordering(949) 00:12:02.605 fused_ordering(950) 00:12:02.605 fused_ordering(951) 00:12:02.605 fused_ordering(952) 00:12:02.605 fused_ordering(953) 00:12:02.605 fused_ordering(954) 00:12:02.605 fused_ordering(955) 00:12:02.605 fused_ordering(956) 00:12:02.605 fused_ordering(957) 00:12:02.605 fused_ordering(958) 
00:12:02.605 fused_ordering(959) 00:12:02.605 fused_ordering(960) 00:12:02.605 fused_ordering(961) 00:12:02.605 fused_ordering(962) 00:12:02.605 fused_ordering(963) 00:12:02.605 fused_ordering(964) 00:12:02.605 fused_ordering(965) 00:12:02.605 fused_ordering(966) 00:12:02.605 fused_ordering(967) 00:12:02.605 fused_ordering(968) 00:12:02.605 fused_ordering(969) 00:12:02.605 fused_ordering(970) 00:12:02.605 fused_ordering(971) 00:12:02.605 fused_ordering(972) 00:12:02.605 fused_ordering(973) 00:12:02.605 fused_ordering(974) 00:12:02.605 fused_ordering(975) 00:12:02.605 fused_ordering(976) 00:12:02.605 fused_ordering(977) 00:12:02.605 fused_ordering(978) 00:12:02.605 fused_ordering(979) 00:12:02.605 fused_ordering(980) 00:12:02.605 fused_ordering(981) 00:12:02.605 fused_ordering(982) 00:12:02.605 fused_ordering(983) 00:12:02.605 fused_ordering(984) 00:12:02.605 fused_ordering(985) 00:12:02.605 fused_ordering(986) 00:12:02.605 fused_ordering(987) 00:12:02.605 fused_ordering(988) 00:12:02.605 fused_ordering(989) 00:12:02.605 fused_ordering(990) 00:12:02.605 fused_ordering(991) 00:12:02.605 fused_ordering(992) 00:12:02.605 fused_ordering(993) 00:12:02.605 fused_ordering(994) 00:12:02.605 fused_ordering(995) 00:12:02.605 fused_ordering(996) 00:12:02.605 fused_ordering(997) 00:12:02.605 fused_ordering(998) 00:12:02.605 fused_ordering(999) 00:12:02.605 fused_ordering(1000) 00:12:02.605 fused_ordering(1001) 00:12:02.605 fused_ordering(1002) 00:12:02.605 fused_ordering(1003) 00:12:02.605 fused_ordering(1004) 00:12:02.605 fused_ordering(1005) 00:12:02.605 fused_ordering(1006) 00:12:02.605 fused_ordering(1007) 00:12:02.605 fused_ordering(1008) 00:12:02.605 fused_ordering(1009) 00:12:02.605 fused_ordering(1010) 00:12:02.605 fused_ordering(1011) 00:12:02.605 fused_ordering(1012) 00:12:02.605 fused_ordering(1013) 00:12:02.605 fused_ordering(1014) 00:12:02.605 fused_ordering(1015) 00:12:02.605 fused_ordering(1016) 00:12:02.605 fused_ordering(1017) 00:12:02.605 fused_ordering(1018) 00:12:02.605 fused_ordering(1019) 00:12:02.605 fused_ordering(1020) 00:12:02.605 fused_ordering(1021) 00:12:02.605 fused_ordering(1022) 00:12:02.605 fused_ordering(1023) 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:02.605 rmmod nvme_tcp 00:12:02.605 rmmod nvme_fabrics 00:12:02.605 rmmod nvme_keyring 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:02.605 03:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2358129 ']' 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2358129 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2358129 ']' 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2358129 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2358129 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2358129' 00:12:02.605 killing process with pid 2358129 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2358129 00:12:02.605 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2358129 00:12:02.865 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:02.865 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:02.865 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:02.865 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:02.865 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:02.865 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:02.865 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:02.865 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:02.865 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:02.865 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.865 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.865 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.772 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:04.772 00:12:04.772 real 0m7.586s 00:12:04.772 user 0m5.065s 00:12:04.772 sys 0m3.236s 00:12:04.772 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.772 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:04.772 ************************************ 00:12:04.772 END TEST nvmf_fused_ordering 00:12:04.772 
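The nvmftestfini trace above unwinds the fixture: it unloads the NVMe host modules, kills the target, strips only the iptables rule that was tagged SPDK_NVMF at setup, and removes the target namespace. Roughly, assuming the names this particular run used (pid 2358129, namespace cvl_0_0_ns_spdk, initiator port cvl_0_1), the equivalent by hand would be:

    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 2358129                      # the nvmf_tgt from the setup step; the harness then waits for it to exit
    # restore iptables minus the SPDK_NVMF-tagged ACCEPT rule added by ipts at setup
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk   # roughly what _remove_spdk_ns does; cvl_0_0 falls back to the default netns
    ip -4 addr flush cvl_0_1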
************************************ 00:12:04.772 03:59:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:04.772 03:59:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:04.772 03:59:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.772 03:59:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.032 ************************************ 00:12:05.032 START TEST nvmf_ns_masking 00:12:05.032 ************************************ 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:05.032 * Looking for test storage... 00:12:05.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:05.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.032 --rc genhtml_branch_coverage=1 00:12:05.032 --rc genhtml_function_coverage=1 00:12:05.032 --rc genhtml_legend=1 00:12:05.032 --rc geninfo_all_blocks=1 00:12:05.032 --rc geninfo_unexecuted_blocks=1 00:12:05.032 00:12:05.032 ' 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:05.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.032 --rc genhtml_branch_coverage=1 00:12:05.032 --rc genhtml_function_coverage=1 00:12:05.032 --rc genhtml_legend=1 00:12:05.032 --rc geninfo_all_blocks=1 00:12:05.032 --rc geninfo_unexecuted_blocks=1 00:12:05.032 00:12:05.032 ' 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:05.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.032 --rc genhtml_branch_coverage=1 00:12:05.032 --rc genhtml_function_coverage=1 00:12:05.032 --rc genhtml_legend=1 00:12:05.032 --rc geninfo_all_blocks=1 00:12:05.032 --rc geninfo_unexecuted_blocks=1 00:12:05.032 00:12:05.032 ' 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:05.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.032 --rc genhtml_branch_coverage=1 00:12:05.032 --rc genhtml_function_coverage=1 00:12:05.032 --rc genhtml_legend=1 00:12:05.032 --rc geninfo_all_blocks=1 00:12:05.032 --rc geninfo_unexecuted_blocks=1 00:12:05.032 00:12:05.032 ' 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.032 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7ba42a92-96df-42df-997a-0fb38a507fe9 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a3d682c1-7d99-45a5-b7a3-1d2931807297 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=1c8d70d3-58c9-4805-b193-1291dc5edf83 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.033 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:06.935 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:06.935 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:06.935 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:06.935 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.193 04:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:07.193 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:07.193 04:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:07.193 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.193 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:07.194 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
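The discovery pass above maps each supported e810 PCI function to its kernel network interface by globbing the device's sysfs net directory and keeping only interfaces that are up. A minimal standalone sketch of that lookup (bash; the PCI address is just the example found in this run, and the operstate read is an illustrative stand-in for the trace's up-check, not the helper's exact code):

pci=0000:0a:00.0                                   # example address, matches the device reported above
for path in /sys/bus/pci/devices/$pci/net/*; do
    dev=${path##*/}                                # strip the sysfs prefix, keep the interface name (e.g. cvl_0_0)
    state=$(cat /sys/class/net/$dev/operstate)     # only interfaces reporting "up" are kept by the test
    echo "Found net device under $pci: $dev ($state)"
done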
00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:07.194 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.194 04:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:12:07.194 00:12:07.194 --- 10.0.0.2 ping statistics --- 00:12:07.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.194 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:12:07.194 00:12:07.194 --- 10.0.0.1 ping statistics --- 00:12:07.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.194 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2360571 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2360571 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2360571 ']' 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.194 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:07.194 [2024-12-10 04:00:01.528876] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:12:07.194 [2024-12-10 04:00:01.528946] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.452 [2024-12-10 04:00:01.602691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.452 [2024-12-10 04:00:01.657433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.452 [2024-12-10 04:00:01.657492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.452 [2024-12-10 04:00:01.657516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.452 [2024-12-10 04:00:01.657542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.452 [2024-12-10 04:00:01.657560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
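The target in this run is launched inside the cvl_0_0_ns_spdk network namespace, and the harness then blocks until the JSON-RPC socket answers before configuring it. A rough equivalent of that bring-up, with paths abbreviated; the polling loop is an illustrative stand-in for waitforlisten rather than the helper's actual implementation:

sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# Poll the RPC socket until the target is ready to accept configuration calls.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt up with pid $nvmfpid"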
00:12:07.452 [2024-12-10 04:00:01.658243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.452 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.452 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:07.452 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:07.452 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:07.452 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:07.452 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.452 04:00:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:07.710 [2024-12-10 04:00:02.055002] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.710 04:00:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:07.710 04:00:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:07.710 04:00:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:07.991 Malloc1 00:12:08.305 04:00:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:08.305 Malloc2 00:12:08.562 04:00:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:08.820 04:00:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:09.079 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.337 [2024-12-10 04:00:03.605360] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.337 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:09.337 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1c8d70d3-58c9-4805-b193-1291dc5edf83 -a 10.0.0.2 -s 4420 -i 4 00:12:09.595 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:09.595 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:09.595 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:09.595 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:09.595 
04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:11.493 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:11.493 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:11.493 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:11.493 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:11.493 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.493 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:11.493 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:11.493 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:11.751 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:11.751 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:11.751 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:11.751 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.751 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:11.751 [ 0]:0x1 00:12:11.751 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:11.751 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:11.751 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7eaa3ddeb7c742f38a6348a73ced9233 00:12:11.751 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7eaa3ddeb7c742f38a6348a73ced9233 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.751 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:12.009 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:12.009 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:12.009 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:12.009 [ 0]:0x1 00:12:12.009 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:12.009 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:12.009 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7eaa3ddeb7c742f38a6348a73ced9233 00:12:12.009 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7eaa3ddeb7c742f38a6348a73ced9233 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:12.009 04:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:12.009 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:12.009 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:12.009 [ 1]:0x2 00:12:12.009 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:12.009 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:12.009 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=10355702ad5a4003a18f2bbb103b4997 00:12:12.009 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10355702ad5a4003a18f2bbb103b4997 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:12.009 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:12.009 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.267 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.525 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:12.783 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:12.783 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1c8d70d3-58c9-4805-b193-1291dc5edf83 -a 10.0.0.2 -s 4420 -i 4 00:12:12.783 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:12.783 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:12.783 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.783 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:12.783 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:12.783 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:15.309 [ 0]:0x2 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=10355702ad5a4003a18f2bbb103b4997 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10355702ad5a4003a18f2bbb103b4997 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:15.309 [ 0]:0x1 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7eaa3ddeb7c742f38a6348a73ced9233 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7eaa3ddeb7c742f38a6348a73ced9233 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:15.309 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:15.309 [ 1]:0x2 00:12:15.566 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:15.566 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:15.566 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=10355702ad5a4003a18f2bbb103b4997 00:12:15.566 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10355702ad5a4003a18f2bbb103b4997 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.566 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:15.824 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:15.824 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:15.824 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:15.824 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:15.824 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:15.824 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:15.824 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:15.825 04:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:15.825 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:15.825 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:15.825 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:15.825 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:15.825 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:15.825 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.825 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:15.825 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:15.825 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:15.825 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:15.825 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:15.825 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:15.825 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:15.825 [ 0]:0x2 00:12:15.825 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:15.825 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:15.825 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=10355702ad5a4003a18f2bbb103b4997 00:12:15.825 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10355702ad5a4003a18f2bbb103b4997 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.825 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:15.825 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.825 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:16.082 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:16.082 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1c8d70d3-58c9-4805-b193-1291dc5edf83 -a 10.0.0.2 -s 4420 -i 4 00:12:16.340 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:16.340 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:16.340 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:16.340 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:16.340 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:16.340 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:18.237 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:18.237 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:18.237 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:18.237 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:18.237 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:18.237 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:18.237 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:18.237 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:18.494 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:18.494 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:18.494 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:18.494 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:18.494 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:18.494 [ 0]:0x1 00:12:18.494 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:18.494 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:18.494 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7eaa3ddeb7c742f38a6348a73ced9233 00:12:18.494 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7eaa3ddeb7c742f38a6348a73ced9233 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:18.494 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:18.494 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:18.494 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:18.494 [ 1]:0x2 00:12:18.494 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:18.494 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:18.752 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=10355702ad5a4003a18f2bbb103b4997 00:12:18.752 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10355702ad5a4003a18f2bbb103b4997 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:18.752 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:19.010 [ 0]:0x2 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=10355702ad5a4003a18f2bbb103b4997 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10355702ad5a4003a18f2bbb103b4997 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:19.010 04:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:19.010 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:19.268 [2024-12-10 04:00:13.511026] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:19.268 request: 00:12:19.268 { 00:12:19.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:19.268 "nsid": 2, 00:12:19.268 "host": "nqn.2016-06.io.spdk:host1", 00:12:19.268 "method": "nvmf_ns_remove_host", 00:12:19.268 "req_id": 1 00:12:19.268 } 00:12:19.268 Got JSON-RPC error response 00:12:19.268 response: 00:12:19.268 { 00:12:19.268 "code": -32602, 00:12:19.268 "message": "Invalid parameters" 00:12:19.268 } 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:19.268 04:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:19.268 [ 0]:0x2 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:19.268 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:19.526 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=10355702ad5a4003a18f2bbb103b4997 00:12:19.526 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10355702ad5a4003a18f2bbb103b4997 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:19.526 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:19.526 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.526 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2362720 00:12:19.526 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.526 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2362720 
/var/tmp/host.sock 00:12:19.526 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2362720 ']' 00:12:19.526 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:19.526 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:19.526 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.526 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:19.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:19.526 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.526 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:19.526 [2024-12-10 04:00:13.862686] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:12:19.526 [2024-12-10 04:00:13.862781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2362720 ] 00:12:19.784 [2024-12-10 04:00:13.928773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.784 [2024-12-10 04:00:13.985313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.042 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.042 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:20.042 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.299 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:20.556 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7ba42a92-96df-42df-997a-0fb38a507fe9 00:12:20.556 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:20.557 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7BA42A9296DF42DF997A0FB38A507FE9 -i 00:12:20.814 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a3d682c1-7d99-45a5-b7a3-1d2931807297 00:12:20.814 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:20.814 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A3D682C17D9945A5B7A31D2931807297 -i 00:12:21.071 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:21.328 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:21.585 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:21.585 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:22.150 nvme0n1 00:12:22.150 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:22.150 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:22.715 nvme1n2 00:12:22.715 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:22.715 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:22.715 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:22.715 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:22.715 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:22.972 04:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:22.972 04:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:22.972 04:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:22.972 04:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:23.229 04:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7ba42a92-96df-42df-997a-0fb38a507fe9 == \7\b\a\4\2\a\9\2\-\9\6\d\f\-\4\2\d\f\-\9\9\7\a\-\0\f\b\3\8\a\5\0\7\f\e\9 ]] 00:12:23.230 04:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:23.230 04:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:23.230 04:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:23.486 04:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
a3d682c1-7d99-45a5-b7a3-1d2931807297 == \a\3\d\6\8\2\c\1\-\7\d\9\9\-\4\5\a\5\-\b\7\a\3\-\1\d\2\9\3\1\8\0\7\2\9\7 ]] 00:12:23.486 04:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.744 04:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:24.002 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 7ba42a92-96df-42df-997a-0fb38a507fe9 00:12:24.002 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:24.002 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7BA42A9296DF42DF997A0FB38A507FE9 00:12:24.002 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:24.002 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7BA42A9296DF42DF997A0FB38A507FE9 00:12:24.002 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:24.002 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:24.002 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:24.002 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:24.002 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:24.002 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:24.002 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:24.002 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:24.002 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7BA42A9296DF42DF997A0FB38A507FE9 00:12:24.259 [2024-12-10 04:00:18.501715] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:24.259 [2024-12-10 04:00:18.501753] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:24.259 [2024-12-10 04:00:18.501768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.259 request: 00:12:24.260 { 00:12:24.260 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:24.260 "namespace": { 00:12:24.260 "bdev_name": 
"invalid", 00:12:24.260 "nsid": 1, 00:12:24.260 "nguid": "7BA42A9296DF42DF997A0FB38A507FE9", 00:12:24.260 "no_auto_visible": false, 00:12:24.260 "hide_metadata": false 00:12:24.260 }, 00:12:24.260 "method": "nvmf_subsystem_add_ns", 00:12:24.260 "req_id": 1 00:12:24.260 } 00:12:24.260 Got JSON-RPC error response 00:12:24.260 response: 00:12:24.260 { 00:12:24.260 "code": -32602, 00:12:24.260 "message": "Invalid parameters" 00:12:24.260 } 00:12:24.260 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:24.260 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:24.260 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:24.260 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:24.260 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 7ba42a92-96df-42df-997a-0fb38a507fe9 00:12:24.260 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:24.260 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7BA42A9296DF42DF997A0FB38A507FE9 -i 00:12:24.517 04:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:26.415 04:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:26.415 04:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:26.415 04:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:26.979 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:26.979 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2362720 00:12:26.979 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2362720 ']' 00:12:26.979 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2362720 00:12:26.979 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:26.979 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.979 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2362720 00:12:26.979 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:26.979 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:26.979 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2362720' 00:12:26.979 killing process with pid 2362720 00:12:26.979 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2362720 00:12:26.979 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2362720 00:12:27.236 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:27.494 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:27.494 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:27.494 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:27.494 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:27.494 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:27.494 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:27.494 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:27.494 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:27.494 rmmod nvme_tcp 00:12:27.494 rmmod nvme_fabrics 00:12:27.494 rmmod nvme_keyring 00:12:27.494 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:27.494 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:27.494 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:27.494 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2360571 ']' 00:12:27.494 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2360571 00:12:27.494 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2360571 ']' 00:12:27.494 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2360571 00:12:27.494 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:27.494 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.494 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2360571 00:12:27.752 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.752 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.752 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2360571' 00:12:27.752 killing process with pid 2360571 00:12:27.752 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2360571 00:12:27.752 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2360571 00:12:28.073 04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:28.073 04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:28.073 04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:28.073 04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:28.073 04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:28.073 04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 
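Condensed from the masking test traced above, a minimal sketch of the operations it drives against subsystem nqn.2016-06.io.spdk:cnode1 (rpc.py abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py, and the host is assumed already connected as nqn.2016-06.io.spdk:host1):

  # Add a namespace with an explicit NGUID, hidden from all hosts by default (-i)
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7BA42A9296DF42DF997A0FB38A507FE9 -i
  # Toggle per-host visibility of namespace 1
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # From the connected host, check what the controller now exposes
  nvme list-ns /dev/nvme0 | grep 0x1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # an all-zero NGUID is how the test detects a masked namespace
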
00:12:28.073 04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:28.073 04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:28.073 04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:28.073 04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.073 04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.073 04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.987 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:29.987 00:12:29.987 real 0m25.047s 00:12:29.987 user 0m36.240s 00:12:29.987 sys 0m4.650s 00:12:29.987 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.987 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:29.987 ************************************ 00:12:29.987 END TEST nvmf_ns_masking 00:12:29.987 ************************************ 00:12:29.987 04:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:29.987 04:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:29.987 04:00:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:29.987 04:00:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.987 04:00:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:29.987 ************************************ 00:12:29.987 START TEST nvmf_nvme_cli 00:12:29.987 ************************************ 00:12:29.987 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:29.987 * Looking for test storage... 
00:12:29.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.987 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:29.987 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:12:29.987 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:30.246 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:30.246 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:30.246 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:30.246 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:30.246 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.246 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:30.246 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:30.246 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:30.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.247 --rc genhtml_branch_coverage=1 00:12:30.247 --rc genhtml_function_coverage=1 00:12:30.247 --rc genhtml_legend=1 00:12:30.247 --rc geninfo_all_blocks=1 00:12:30.247 --rc geninfo_unexecuted_blocks=1 00:12:30.247 00:12:30.247 ' 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:30.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.247 --rc genhtml_branch_coverage=1 00:12:30.247 --rc genhtml_function_coverage=1 00:12:30.247 --rc genhtml_legend=1 00:12:30.247 --rc geninfo_all_blocks=1 00:12:30.247 --rc geninfo_unexecuted_blocks=1 00:12:30.247 00:12:30.247 ' 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:30.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.247 --rc genhtml_branch_coverage=1 00:12:30.247 --rc genhtml_function_coverage=1 00:12:30.247 --rc genhtml_legend=1 00:12:30.247 --rc geninfo_all_blocks=1 00:12:30.247 --rc geninfo_unexecuted_blocks=1 00:12:30.247 00:12:30.247 ' 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:30.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.247 --rc genhtml_branch_coverage=1 00:12:30.247 --rc genhtml_function_coverage=1 00:12:30.247 --rc genhtml_legend=1 00:12:30.247 --rc geninfo_all_blocks=1 00:12:30.247 --rc geninfo_unexecuted_blocks=1 00:12:30.247 00:12:30.247 ' 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:30.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:30.247 04:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.247 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.248 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:30.248 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:30.248 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:30.248 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:32.150 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:32.151 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:32.151 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.151 
04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:32.151 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:32.151 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.151 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:32.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:12:32.410 00:12:32.410 --- 10.0.0.2 ping statistics --- 00:12:32.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.410 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:32.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:12:32.410 00:12:32.410 --- 10.0.0.1 ping statistics --- 00:12:32.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.410 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2365637 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2365637 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2365637 ']' 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.410 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.410 [2024-12-10 04:00:26.736887] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:12:32.410 [2024-12-10 04:00:26.736961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.669 [2024-12-10 04:00:26.811208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.669 [2024-12-10 04:00:26.870609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.669 [2024-12-10 04:00:26.870664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.669 [2024-12-10 04:00:26.870692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.669 [2024-12-10 04:00:26.870703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.669 [2024-12-10 04:00:26.870713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.669 [2024-12-10 04:00:26.872266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.669 [2024-12-10 04:00:26.872327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.669 [2024-12-10 04:00:26.872393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.669 [2024-12-10 04:00:26.872396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.669 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.669 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:12:32.669 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:32.669 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:32.669 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.669 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.669 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.669 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.669 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.669 [2024-12-10 04:00:27.010609] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.669 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.669 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:32.669 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.669 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.927 Malloc0 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
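The target-side provisioning the nvme_cli test issues through rpc_cmd, collected here as a sketch (the individual calls appear in the trace around this point, with the same subsystem, serial, and listener values as this run; rpc.py again abbreviates the workspace script path):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
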
00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.927 Malloc1 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.927 [2024-12-10 04:00:27.111675] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.927 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.928 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:32.928 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.928 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.928 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.928 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:32.928 00:12:32.928 Discovery Log Number of Records 2, Generation counter 2 00:12:32.928 =====Discovery Log Entry 0====== 00:12:32.928 trtype: tcp 00:12:32.928 adrfam: ipv4 00:12:32.928 subtype: current discovery subsystem 00:12:32.928 treq: not required 00:12:32.928 portid: 0 00:12:32.928 trsvcid: 4420 00:12:32.928 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:12:32.928 traddr: 10.0.0.2 00:12:32.928 eflags: explicit discovery connections, duplicate discovery information 00:12:32.928 sectype: none 00:12:32.928 =====Discovery Log Entry 1====== 00:12:32.928 trtype: tcp 00:12:32.928 adrfam: ipv4 00:12:32.928 subtype: nvme subsystem 00:12:32.928 treq: not required 00:12:32.928 portid: 0 00:12:32.928 trsvcid: 4420 00:12:32.928 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:32.928 traddr: 10.0.0.2 00:12:32.928 eflags: none 00:12:32.928 sectype: none 00:12:32.928 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:32.928 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:32.928 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:32.928 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:32.928 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:32.928 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:32.928 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:32.928 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:32.928 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:32.928 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:32.928 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.861 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:33.862 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:12:33.862 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.862 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:33.862 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:33.862 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:35.761 04:00:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:12:35.761 /dev/nvme0n2 ]] 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:35.761 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.761 04:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:12:35.761 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:35.762 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:12:35.762 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:35.762 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:35.762 rmmod nvme_tcp 00:12:36.020 rmmod nvme_fabrics 00:12:36.020 rmmod nvme_keyring 00:12:36.020 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:36.020 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:12:36.020 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:12:36.020 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2365637 ']' 00:12:36.020 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2365637 00:12:36.020 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2365637 ']' 00:12:36.020 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2365637 00:12:36.020 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:12:36.020 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.020 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2365637 00:12:36.020 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.020 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.020 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2365637' 00:12:36.020 killing process with pid 2365637 00:12:36.020 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2365637 00:12:36.020 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2365637 00:12:36.278 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:36.278 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:36.278 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:36.278 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:12:36.278 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:12:36.278 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:36.278 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:12:36.278 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:36.278 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:36.278 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.278 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.278 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.183 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:38.183 00:12:38.183 real 0m8.307s 00:12:38.183 user 0m15.070s 00:12:38.183 sys 0m2.339s 00:12:38.183 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.183 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:38.183 ************************************ 00:12:38.183 END TEST nvmf_nvme_cli 00:12:38.183 ************************************ 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:38.442 ************************************ 00:12:38.442 START TEST nvmf_vfio_user 00:12:38.442 ************************************ 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:12:38.442 * Looking for test storage... 00:12:38.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:38.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.442 --rc genhtml_branch_coverage=1 00:12:38.442 --rc genhtml_function_coverage=1 00:12:38.442 --rc genhtml_legend=1 00:12:38.442 --rc geninfo_all_blocks=1 00:12:38.442 --rc geninfo_unexecuted_blocks=1 00:12:38.442 00:12:38.442 ' 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:38.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.442 --rc genhtml_branch_coverage=1 00:12:38.442 --rc genhtml_function_coverage=1 00:12:38.442 --rc genhtml_legend=1 00:12:38.442 --rc geninfo_all_blocks=1 00:12:38.442 --rc geninfo_unexecuted_blocks=1 00:12:38.442 00:12:38.442 ' 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:38.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.442 --rc genhtml_branch_coverage=1 00:12:38.442 --rc genhtml_function_coverage=1 00:12:38.442 --rc genhtml_legend=1 00:12:38.442 --rc geninfo_all_blocks=1 00:12:38.442 --rc geninfo_unexecuted_blocks=1 00:12:38.442 00:12:38.442 ' 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:38.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.442 --rc genhtml_branch_coverage=1 00:12:38.442 --rc genhtml_function_coverage=1 00:12:38.442 --rc genhtml_legend=1 00:12:38.442 --rc geninfo_all_blocks=1 00:12:38.442 --rc geninfo_unexecuted_blocks=1 00:12:38.442 00:12:38.442 ' 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:38.442 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:38.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
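The "[: : integer expression expected" message above is bash's [ builtin complaining that nvmf/common.sh line 33 compares an empty string numerically ('[' '' -eq 1 ']'): the flag being checked expands to nothing, the comparison simply evaluates false, and the script carries on, so the message is noise rather than a real failure. A defensive form gives the variable a numeric default before comparing; SOME_FLAG in the sketch is a placeholder, not the actual variable name used in common.sh:

  # hedged sketch of a quieter numeric test; SOME_FLAG is hypothetical
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi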
00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2366449 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2366449' 00:12:38.443 Process pid: 2366449 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2366449 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2366449 ']' 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.443 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:38.702 [2024-12-10 04:00:32.824382] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:12:38.702 [2024-12-10 04:00:32.824469] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.702 [2024-12-10 04:00:32.891387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.702 [2024-12-10 04:00:32.951186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.702 [2024-12-10 04:00:32.951231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
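The app_setup_trace notices just above describe how to inspect this nvmf_tgt instance (shm id 0, tracepoint mask 0xFFFF): pull a snapshot of the enabled tracepoints while the target runs, or copy the shared-memory trace file for offline decoding. A minimal sketch, assuming spdk_trace is on PATH or run from the SPDK build tree, with /tmp used only as an example destination:

  # live snapshot of the nvmf app's tracepoints (shm id 0, as reported above)
  spdk_trace -s nvmf -i 0
  # or keep the raw trace buffer for later analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0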
00:12:38.702 [2024-12-10 04:00:32.951258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.702 [2024-12-10 04:00:32.951270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.702 [2024-12-10 04:00:32.951279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.702 [2024-12-10 04:00:32.952799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.702 [2024-12-10 04:00:32.952860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.702 [2024-12-10 04:00:32.952910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.702 [2024-12-10 04:00:32.952914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.959 04:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.959 04:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:12:38.959 04:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:39.892 04:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:40.150 04:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:40.150 04:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:40.150 04:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:40.150 04:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:40.150 04:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:40.408 Malloc1 00:12:40.665 04:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:40.925 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:41.184 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:41.442 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:41.442 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:41.442 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:41.700 Malloc2 00:12:41.700 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
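The records above show run_test entering nvmf_vfio_user and setup_nvmf_vfio_user wiring the VFIOUSER transport to malloc-backed subsystems whose listener "address" is a directory under /var/run/vfio-user rather than an IP and port. Condensed to one device, the sequence looks like the sketch below (the second device repeats it with Malloc2, cnode2 and vfio-user2/2, as traced around this point); scripts/rpc.py stands in for the full rpc.py path used by the test:

  # one vfio-user endpoint, mirroring the setup traced above
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0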
00:12:41.957 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:42.215 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:42.473 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:42.473 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:42.473 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:42.473 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:42.473 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:42.473 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:42.473 [2024-12-10 04:00:36.781000] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:12:42.473 [2024-12-10 04:00:36.781045] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366983 ] 00:12:42.473 [2024-12-10 04:00:36.829188] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:42.473 [2024-12-10 04:00:36.842046] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:42.473 [2024-12-10 04:00:36.842078] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f85e239c000 00:12:42.473 [2024-12-10 04:00:36.843036] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.473 [2024-12-10 04:00:36.844034] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.473 [2024-12-10 04:00:36.845038] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.473 [2024-12-10 04:00:36.846046] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:42.473 [2024-12-10 04:00:36.847049] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:42.473 [2024-12-10 04:00:36.848056] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.473 [2024-12-10 04:00:36.849062] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:12:42.473 [2024-12-10 04:00:36.850066] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.473 [2024-12-10 04:00:36.851074] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:42.473 [2024-12-10 04:00:36.851094] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f85e2391000 00:12:42.473 [2024-12-10 04:00:36.852255] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:42.733 [2024-12-10 04:00:36.868036] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:42.733 [2024-12-10 04:00:36.868088] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:12:42.733 [2024-12-10 04:00:36.870174] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:42.733 [2024-12-10 04:00:36.870233] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:42.733 [2024-12-10 04:00:36.870331] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:12:42.733 [2024-12-10 04:00:36.870364] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:12:42.733 [2024-12-10 04:00:36.870376] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:12:42.733 [2024-12-10 04:00:36.871170] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:42.733 [2024-12-10 04:00:36.871196] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:12:42.733 [2024-12-10 04:00:36.871210] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:12:42.733 [2024-12-10 04:00:36.872173] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:42.733 [2024-12-10 04:00:36.872194] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:12:42.733 [2024-12-10 04:00:36.872208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:42.733 [2024-12-10 04:00:36.873177] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:42.733 [2024-12-10 04:00:36.873196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:42.733 [2024-12-10 04:00:36.874178] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
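The nvme/nvme_vfio DEBUG records around this point are the standard NVMe controller-enable handshake, just carried over the vfio-user transport: spdk_nvme_identify maps the emulated BARs, reads CAP (offset 0x0) and VS (0x8), sees CC (0x14) and CSTS (0x1c) both zero, programs the admin queue registers, sets CC.EN, and polls CSTS until RDY reads 1 before issuing Identify. The same register-level trace can be reproduced against the second endpoint created earlier; the sketch assumes it is run from the SPDK build tree, as the test does:

  build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
      -g -L nvme -L nvme_vfio -L vfio_pci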
00:12:42.733 [2024-12-10 04:00:36.874198] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:42.733 [2024-12-10 04:00:36.874207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:42.733 [2024-12-10 04:00:36.874219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:42.733 [2024-12-10 04:00:36.874329] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:12:42.733 [2024-12-10 04:00:36.874337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:42.733 [2024-12-10 04:00:36.874346] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:42.733 [2024-12-10 04:00:36.875188] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:42.733 [2024-12-10 04:00:36.876191] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:42.733 [2024-12-10 04:00:36.877197] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:42.733 [2024-12-10 04:00:36.878195] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:42.733 [2024-12-10 04:00:36.878319] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:42.733 [2024-12-10 04:00:36.879209] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:42.733 [2024-12-10 04:00:36.879227] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:42.733 [2024-12-10 04:00:36.879236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:42.733 [2024-12-10 04:00:36.879260] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:12:42.733 [2024-12-10 04:00:36.879274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:42.733 [2024-12-10 04:00:36.879309] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:42.733 [2024-12-10 04:00:36.879319] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:42.733 [2024-12-10 04:00:36.879326] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.733 [2024-12-10 04:00:36.879348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:12:42.733 [2024-12-10 04:00:36.879418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:42.733 [2024-12-10 04:00:36.879441] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:12:42.733 [2024-12-10 04:00:36.879450] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:12:42.733 [2024-12-10 04:00:36.879457] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:12:42.733 [2024-12-10 04:00:36.879469] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:42.733 [2024-12-10 04:00:36.879478] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:12:42.733 [2024-12-10 04:00:36.879486] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:12:42.733 [2024-12-10 04:00:36.879494] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:12:42.733 [2024-12-10 04:00:36.879506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:42.733 [2024-12-10 04:00:36.879522] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:42.733 [2024-12-10 04:00:36.879559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:42.733 [2024-12-10 04:00:36.879579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.733 [2024-12-10 04:00:36.879607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.733 [2024-12-10 04:00:36.879621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.733 [2024-12-10 04:00:36.879633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.733 [2024-12-10 04:00:36.879642] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:42.733 [2024-12-10 04:00:36.879659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:42.733 [2024-12-10 04:00:36.879675] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:42.733 [2024-12-10 04:00:36.879687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:42.733 [2024-12-10 04:00:36.879699] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:12:42.733 
[2024-12-10 04:00:36.879708] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:42.733 [2024-12-10 04:00:36.879719] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:12:42.733 [2024-12-10 04:00:36.879730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:12:42.733 [2024-12-10 04:00:36.879743] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:42.733 [2024-12-10 04:00:36.879763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:42.733 [2024-12-10 04:00:36.879849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:12:42.733 [2024-12-10 04:00:36.879868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:42.733 [2024-12-10 04:00:36.879883] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:42.733 [2024-12-10 04:00:36.879895] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:42.733 [2024-12-10 04:00:36.879916] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.733 [2024-12-10 04:00:36.879926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:42.733 [2024-12-10 04:00:36.879945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:42.733 [2024-12-10 04:00:36.879973] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:12:42.733 [2024-12-10 04:00:36.879991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:12:42.733 [2024-12-10 04:00:36.880007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:42.733 [2024-12-10 04:00:36.880019] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:42.733 [2024-12-10 04:00:36.880026] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:42.733 [2024-12-10 04:00:36.880032] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.733 [2024-12-10 04:00:36.880041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:42.733 [2024-12-10 04:00:36.880079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:42.733 [2024-12-10 04:00:36.880103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:12:42.734 [2024-12-10 04:00:36.880119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:42.734 [2024-12-10 04:00:36.880132] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:42.734 [2024-12-10 04:00:36.880140] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:42.734 [2024-12-10 04:00:36.880145] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.734 [2024-12-10 04:00:36.880154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:42.734 [2024-12-10 04:00:36.880168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:42.734 [2024-12-10 04:00:36.880182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:42.734 [2024-12-10 04:00:36.880194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:42.734 [2024-12-10 04:00:36.880208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:12:42.734 [2024-12-10 04:00:36.880222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:12:42.734 [2024-12-10 04:00:36.880231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:42.734 [2024-12-10 04:00:36.880240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:12:42.734 [2024-12-10 04:00:36.880248] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:42.734 [2024-12-10 04:00:36.880259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:12:42.734 [2024-12-10 04:00:36.880268] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:12:42.734 [2024-12-10 04:00:36.880297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:42.734 [2024-12-10 04:00:36.880315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:42.734 [2024-12-10 04:00:36.880334] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:42.734 [2024-12-10 04:00:36.880346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:42.734 [2024-12-10 04:00:36.880362] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:42.734 [2024-12-10 04:00:36.880377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:42.734 [2024-12-10 04:00:36.880393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:42.734 [2024-12-10 04:00:36.880404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:42.734 [2024-12-10 04:00:36.880427] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:42.734 [2024-12-10 04:00:36.880437] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:42.734 [2024-12-10 04:00:36.880443] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:42.734 [2024-12-10 04:00:36.880448] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:42.734 [2024-12-10 04:00:36.880454] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:42.734 [2024-12-10 04:00:36.880463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:42.734 [2024-12-10 04:00:36.880474] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:42.734 [2024-12-10 04:00:36.880482] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:42.734 [2024-12-10 04:00:36.880488] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.734 [2024-12-10 04:00:36.880496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:42.734 [2024-12-10 04:00:36.880507] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:42.734 [2024-12-10 04:00:36.880515] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:42.734 [2024-12-10 04:00:36.880535] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.734 [2024-12-10 04:00:36.880553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:42.734 [2024-12-10 04:00:36.880568] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:42.734 [2024-12-10 04:00:36.880577] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:42.734 [2024-12-10 04:00:36.880582] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.734 [2024-12-10 04:00:36.880607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:42.734 [2024-12-10 04:00:36.880626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:42.734 [2024-12-10 04:00:36.880648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:12:42.734 [2024-12-10 04:00:36.880666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:42.734 [2024-12-10 04:00:36.880679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:42.734 ===================================================== 00:12:42.734 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:42.734 ===================================================== 00:12:42.734 Controller Capabilities/Features 00:12:42.734 ================================ 00:12:42.734 Vendor ID: 4e58 00:12:42.734 Subsystem Vendor ID: 4e58 00:12:42.734 Serial Number: SPDK1 00:12:42.734 Model Number: SPDK bdev Controller 00:12:42.734 Firmware Version: 25.01 00:12:42.734 Recommended Arb Burst: 6 00:12:42.734 IEEE OUI Identifier: 8d 6b 50 00:12:42.734 Multi-path I/O 00:12:42.734 May have multiple subsystem ports: Yes 00:12:42.734 May have multiple controllers: Yes 00:12:42.734 Associated with SR-IOV VF: No 00:12:42.734 Max Data Transfer Size: 131072 00:12:42.734 Max Number of Namespaces: 32 00:12:42.734 Max Number of I/O Queues: 127 00:12:42.734 NVMe Specification Version (VS): 1.3 00:12:42.734 NVMe Specification Version (Identify): 1.3 00:12:42.734 Maximum Queue Entries: 256 00:12:42.734 Contiguous Queues Required: Yes 00:12:42.734 Arbitration Mechanisms Supported 00:12:42.734 Weighted Round Robin: Not Supported 00:12:42.734 Vendor Specific: Not Supported 00:12:42.734 Reset Timeout: 15000 ms 00:12:42.734 Doorbell Stride: 4 bytes 00:12:42.734 NVM Subsystem Reset: Not Supported 00:12:42.734 Command Sets Supported 00:12:42.734 NVM Command Set: Supported 00:12:42.734 Boot Partition: Not Supported 00:12:42.734 Memory Page Size Minimum: 4096 bytes 00:12:42.734 Memory Page Size Maximum: 4096 bytes 00:12:42.734 Persistent Memory Region: Not Supported 00:12:42.734 Optional Asynchronous Events Supported 00:12:42.734 Namespace Attribute Notices: Supported 00:12:42.734 Firmware Activation Notices: Not Supported 00:12:42.734 ANA Change Notices: Not Supported 00:12:42.734 PLE Aggregate Log Change Notices: Not Supported 00:12:42.734 LBA Status Info Alert Notices: Not Supported 00:12:42.734 EGE Aggregate Log Change Notices: Not Supported 00:12:42.734 Normal NVM Subsystem Shutdown event: Not Supported 00:12:42.734 Zone Descriptor Change Notices: Not Supported 00:12:42.734 Discovery Log Change Notices: Not Supported 00:12:42.734 Controller Attributes 00:12:42.734 128-bit Host Identifier: Supported 00:12:42.734 Non-Operational Permissive Mode: Not Supported 00:12:42.734 NVM Sets: Not Supported 00:12:42.734 Read Recovery Levels: Not Supported 00:12:42.734 Endurance Groups: Not Supported 00:12:42.734 Predictable Latency Mode: Not Supported 00:12:42.734 Traffic Based Keep ALive: Not Supported 00:12:42.734 Namespace Granularity: Not Supported 00:12:42.734 SQ Associations: Not Supported 00:12:42.734 UUID List: Not Supported 00:12:42.734 Multi-Domain Subsystem: Not Supported 00:12:42.734 Fixed Capacity Management: Not Supported 00:12:42.734 Variable Capacity Management: Not Supported 00:12:42.734 Delete Endurance Group: Not Supported 00:12:42.734 Delete NVM Set: Not Supported 00:12:42.734 Extended LBA Formats Supported: Not Supported 00:12:42.734 Flexible Data Placement Supported: Not Supported 00:12:42.734 00:12:42.734 Controller Memory Buffer Support 00:12:42.734 ================================ 00:12:42.734 
Supported: No 00:12:42.734 00:12:42.734 Persistent Memory Region Support 00:12:42.734 ================================ 00:12:42.734 Supported: No 00:12:42.734 00:12:42.734 Admin Command Set Attributes 00:12:42.734 ============================ 00:12:42.734 Security Send/Receive: Not Supported 00:12:42.734 Format NVM: Not Supported 00:12:42.734 Firmware Activate/Download: Not Supported 00:12:42.734 Namespace Management: Not Supported 00:12:42.734 Device Self-Test: Not Supported 00:12:42.734 Directives: Not Supported 00:12:42.734 NVMe-MI: Not Supported 00:12:42.734 Virtualization Management: Not Supported 00:12:42.734 Doorbell Buffer Config: Not Supported 00:12:42.734 Get LBA Status Capability: Not Supported 00:12:42.734 Command & Feature Lockdown Capability: Not Supported 00:12:42.734 Abort Command Limit: 4 00:12:42.734 Async Event Request Limit: 4 00:12:42.734 Number of Firmware Slots: N/A 00:12:42.734 Firmware Slot 1 Read-Only: N/A 00:12:42.734 Firmware Activation Without Reset: N/A 00:12:42.734 Multiple Update Detection Support: N/A 00:12:42.734 Firmware Update Granularity: No Information Provided 00:12:42.735 Per-Namespace SMART Log: No 00:12:42.735 Asymmetric Namespace Access Log Page: Not Supported 00:12:42.735 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:42.735 Command Effects Log Page: Supported 00:12:42.735 Get Log Page Extended Data: Supported 00:12:42.735 Telemetry Log Pages: Not Supported 00:12:42.735 Persistent Event Log Pages: Not Supported 00:12:42.735 Supported Log Pages Log Page: May Support 00:12:42.735 Commands Supported & Effects Log Page: Not Supported 00:12:42.735 Feature Identifiers & Effects Log Page:May Support 00:12:42.735 NVMe-MI Commands & Effects Log Page: May Support 00:12:42.735 Data Area 4 for Telemetry Log: Not Supported 00:12:42.735 Error Log Page Entries Supported: 128 00:12:42.735 Keep Alive: Supported 00:12:42.735 Keep Alive Granularity: 10000 ms 00:12:42.735 00:12:42.735 NVM Command Set Attributes 00:12:42.735 ========================== 00:12:42.735 Submission Queue Entry Size 00:12:42.735 Max: 64 00:12:42.735 Min: 64 00:12:42.735 Completion Queue Entry Size 00:12:42.735 Max: 16 00:12:42.735 Min: 16 00:12:42.735 Number of Namespaces: 32 00:12:42.735 Compare Command: Supported 00:12:42.735 Write Uncorrectable Command: Not Supported 00:12:42.735 Dataset Management Command: Supported 00:12:42.735 Write Zeroes Command: Supported 00:12:42.735 Set Features Save Field: Not Supported 00:12:42.735 Reservations: Not Supported 00:12:42.735 Timestamp: Not Supported 00:12:42.735 Copy: Supported 00:12:42.735 Volatile Write Cache: Present 00:12:42.735 Atomic Write Unit (Normal): 1 00:12:42.735 Atomic Write Unit (PFail): 1 00:12:42.735 Atomic Compare & Write Unit: 1 00:12:42.735 Fused Compare & Write: Supported 00:12:42.735 Scatter-Gather List 00:12:42.735 SGL Command Set: Supported (Dword aligned) 00:12:42.735 SGL Keyed: Not Supported 00:12:42.735 SGL Bit Bucket Descriptor: Not Supported 00:12:42.735 SGL Metadata Pointer: Not Supported 00:12:42.735 Oversized SGL: Not Supported 00:12:42.735 SGL Metadata Address: Not Supported 00:12:42.735 SGL Offset: Not Supported 00:12:42.735 Transport SGL Data Block: Not Supported 00:12:42.735 Replay Protected Memory Block: Not Supported 00:12:42.735 00:12:42.735 Firmware Slot Information 00:12:42.735 ========================= 00:12:42.735 Active slot: 1 00:12:42.735 Slot 1 Firmware Revision: 25.01 00:12:42.735 00:12:42.735 00:12:42.735 Commands Supported and Effects 00:12:42.735 ============================== 00:12:42.735 Admin 
Commands 00:12:42.735 -------------- 00:12:42.735 Get Log Page (02h): Supported 00:12:42.735 Identify (06h): Supported 00:12:42.735 Abort (08h): Supported 00:12:42.735 Set Features (09h): Supported 00:12:42.735 Get Features (0Ah): Supported 00:12:42.735 Asynchronous Event Request (0Ch): Supported 00:12:42.735 Keep Alive (18h): Supported 00:12:42.735 I/O Commands 00:12:42.735 ------------ 00:12:42.735 Flush (00h): Supported LBA-Change 00:12:42.735 Write (01h): Supported LBA-Change 00:12:42.735 Read (02h): Supported 00:12:42.735 Compare (05h): Supported 00:12:42.735 Write Zeroes (08h): Supported LBA-Change 00:12:42.735 Dataset Management (09h): Supported LBA-Change 00:12:42.735 Copy (19h): Supported LBA-Change 00:12:42.735 00:12:42.735 Error Log 00:12:42.735 ========= 00:12:42.735 00:12:42.735 Arbitration 00:12:42.735 =========== 00:12:42.735 Arbitration Burst: 1 00:12:42.735 00:12:42.735 Power Management 00:12:42.735 ================ 00:12:42.735 Number of Power States: 1 00:12:42.735 Current Power State: Power State #0 00:12:42.735 Power State #0: 00:12:42.735 Max Power: 0.00 W 00:12:42.735 Non-Operational State: Operational 00:12:42.735 Entry Latency: Not Reported 00:12:42.735 Exit Latency: Not Reported 00:12:42.735 Relative Read Throughput: 0 00:12:42.735 Relative Read Latency: 0 00:12:42.735 Relative Write Throughput: 0 00:12:42.735 Relative Write Latency: 0 00:12:42.735 Idle Power: Not Reported 00:12:42.735 Active Power: Not Reported 00:12:42.735 Non-Operational Permissive Mode: Not Supported 00:12:42.735 00:12:42.735 Health Information 00:12:42.735 ================== 00:12:42.735 Critical Warnings: 00:12:42.735 Available Spare Space: OK 00:12:42.735 Temperature: OK 00:12:42.735 Device Reliability: OK 00:12:42.735 Read Only: No 00:12:42.735 Volatile Memory Backup: OK 00:12:42.735 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:42.735 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:42.735 Available Spare: 0% 00:12:42.735 Available Sp[2024-12-10 04:00:36.880801] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:42.735 [2024-12-10 04:00:36.880818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:42.735 [2024-12-10 04:00:36.880879] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:12:42.735 [2024-12-10 04:00:36.880913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.735 [2024-12-10 04:00:36.880924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.735 [2024-12-10 04:00:36.880934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.735 [2024-12-10 04:00:36.880943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.735 [2024-12-10 04:00:36.883558] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:42.735 [2024-12-10 04:00:36.883582] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:42.735 [2024-12-10 04:00:36.884234] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:42.735 [2024-12-10 04:00:36.884324] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:12:42.735 [2024-12-10 04:00:36.884338] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:12:42.735 [2024-12-10 04:00:36.885256] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:42.735 [2024-12-10 04:00:36.885281] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:12:42.735 [2024-12-10 04:00:36.885340] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:42.735 [2024-12-10 04:00:36.888555] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:42.735 are Threshold: 0% 00:12:42.735 Life Percentage Used: 0% 00:12:42.735 Data Units Read: 0 00:12:42.735 Data Units Written: 0 00:12:42.735 Host Read Commands: 0 00:12:42.735 Host Write Commands: 0 00:12:42.735 Controller Busy Time: 0 minutes 00:12:42.735 Power Cycles: 0 00:12:42.735 Power On Hours: 0 hours 00:12:42.735 Unsafe Shutdowns: 0 00:12:42.735 Unrecoverable Media Errors: 0 00:12:42.735 Lifetime Error Log Entries: 0 00:12:42.735 Warning Temperature Time: 0 minutes 00:12:42.735 Critical Temperature Time: 0 minutes 00:12:42.735 00:12:42.735 Number of Queues 00:12:42.735 ================ 00:12:42.735 Number of I/O Submission Queues: 127 00:12:42.735 Number of I/O Completion Queues: 127 00:12:42.735 00:12:42.735 Active Namespaces 00:12:42.735 ================= 00:12:42.735 Namespace ID:1 00:12:42.735 Error Recovery Timeout: Unlimited 00:12:42.735 Command Set Identifier: NVM (00h) 00:12:42.735 Deallocate: Supported 00:12:42.735 Deallocated/Unwritten Error: Not Supported 00:12:42.735 Deallocated Read Value: Unknown 00:12:42.735 Deallocate in Write Zeroes: Not Supported 00:12:42.735 Deallocated Guard Field: 0xFFFF 00:12:42.735 Flush: Supported 00:12:42.735 Reservation: Supported 00:12:42.735 Namespace Sharing Capabilities: Multiple Controllers 00:12:42.735 Size (in LBAs): 131072 (0GiB) 00:12:42.735 Capacity (in LBAs): 131072 (0GiB) 00:12:42.735 Utilization (in LBAs): 131072 (0GiB) 00:12:42.735 NGUID: 3383A137C9E94F418F0853492C165810 00:12:42.735 UUID: 3383a137-c9e9-4f41-8f08-53492c165810 00:12:42.735 Thin Provisioning: Not Supported 00:12:42.735 Per-NS Atomic Units: Yes 00:12:42.735 Atomic Boundary Size (Normal): 0 00:12:42.735 Atomic Boundary Size (PFail): 0 00:12:42.735 Atomic Boundary Offset: 0 00:12:42.735 Maximum Single Source Range Length: 65535 00:12:42.735 Maximum Copy Length: 65535 00:12:42.735 Maximum Source Range Count: 1 00:12:42.735 NGUID/EUI64 Never Reused: No 00:12:42.735 Namespace Write Protected: No 00:12:42.735 Number of LBA Formats: 1 00:12:42.735 Current LBA Format: LBA Format #00 00:12:42.735 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:42.735 00:12:42.735 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
00:12:42.994 [2024-12-10 04:00:37.139437] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:48.259 Initializing NVMe Controllers 00:12:48.259 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:48.259 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:48.259 Initialization complete. Launching workers. 00:12:48.259 ======================================================== 00:12:48.259 Latency(us) 00:12:48.259 Device Information : IOPS MiB/s Average min max 00:12:48.259 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 30700.60 119.92 4170.88 1210.80 7774.30 00:12:48.259 ======================================================== 00:12:48.259 Total : 30700.60 119.92 4170.88 1210.80 7774.30 00:12:48.259 00:12:48.259 [2024-12-10 04:00:42.165575] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:48.259 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:48.259 [2024-12-10 04:00:42.416802] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:53.557 Initializing NVMe Controllers 00:12:53.557 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:53.557 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:53.557 Initialization complete. Launching workers. 
00:12:53.557 ======================================================== 00:12:53.557 Latency(us) 00:12:53.557 Device Information : IOPS MiB/s Average min max 00:12:53.557 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16038.13 62.65 7986.12 6967.01 11967.95 00:12:53.557 ======================================================== 00:12:53.557 Total : 16038.13 62.65 7986.12 6967.01 11967.95 00:12:53.557 00:12:53.557 [2024-12-10 04:00:47.452427] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:53.557 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:53.557 [2024-12-10 04:00:47.679562] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:58.822 [2024-12-10 04:00:52.738878] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:58.822 Initializing NVMe Controllers 00:12:58.822 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:58.822 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:58.822 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:58.822 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:58.822 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:58.822 Initialization complete. Launching workers. 00:12:58.822 Starting thread on core 2 00:12:58.822 Starting thread on core 3 00:12:58.822 Starting thread on core 1 00:12:58.822 04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:58.822 [2024-12-10 04:00:53.075069] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:02.105 [2024-12-10 04:00:56.138935] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:02.105 Initializing NVMe Controllers 00:13:02.105 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.106 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.106 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:02.106 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:02.106 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:02.106 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:02.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:02.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:02.106 Initialization complete. Launching workers. 
00:13:02.106 Starting thread on core 1 with urgent priority queue 00:13:02.106 Starting thread on core 2 with urgent priority queue 00:13:02.106 Starting thread on core 3 with urgent priority queue 00:13:02.106 Starting thread on core 0 with urgent priority queue 00:13:02.106 SPDK bdev Controller (SPDK1 ) core 0: 4961.00 IO/s 20.16 secs/100000 ios 00:13:02.106 SPDK bdev Controller (SPDK1 ) core 1: 4754.00 IO/s 21.03 secs/100000 ios 00:13:02.106 SPDK bdev Controller (SPDK1 ) core 2: 5212.00 IO/s 19.19 secs/100000 ios 00:13:02.106 SPDK bdev Controller (SPDK1 ) core 3: 5154.33 IO/s 19.40 secs/100000 ios 00:13:02.106 ======================================================== 00:13:02.106 00:13:02.106 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:02.106 [2024-12-10 04:00:56.464088] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:02.364 Initializing NVMe Controllers 00:13:02.364 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.364 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.364 Namespace ID: 1 size: 0GB 00:13:02.364 Initialization complete. 00:13:02.364 INFO: using host memory buffer for IO 00:13:02.364 Hello world! 00:13:02.364 [2024-12-10 04:00:56.498619] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:02.364 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:02.622 [2024-12-10 04:00:56.802924] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:03.557 Initializing NVMe Controllers 00:13:03.557 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:03.557 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:03.557 Initialization complete. Launching workers. 
00:13:03.557 submit (in ns) avg, min, max = 7620.9, 3525.6, 4019278.9 00:13:03.557 complete (in ns) avg, min, max = 26990.0, 2068.9, 4996745.6 00:13:03.557 00:13:03.557 Submit histogram 00:13:03.557 ================ 00:13:03.557 Range in us Cumulative Count 00:13:03.557 3.508 - 3.532: 0.0163% ( 2) 00:13:03.557 3.532 - 3.556: 0.0980% ( 10) 00:13:03.557 3.556 - 3.579: 0.5715% ( 58) 00:13:03.557 3.579 - 3.603: 2.4900% ( 235) 00:13:03.557 3.603 - 3.627: 5.4862% ( 367) 00:13:03.557 3.627 - 3.650: 13.0133% ( 922) 00:13:03.557 3.650 - 3.674: 20.6629% ( 937) 00:13:03.557 3.674 - 3.698: 30.4596% ( 1200) 00:13:03.557 3.698 - 3.721: 38.5909% ( 996) 00:13:03.557 3.721 - 3.745: 45.6364% ( 863) 00:13:03.557 3.745 - 3.769: 50.9919% ( 656) 00:13:03.557 3.769 - 3.793: 56.3148% ( 652) 00:13:03.557 3.793 - 3.816: 60.1927% ( 475) 00:13:03.557 3.816 - 3.840: 63.4991% ( 405) 00:13:03.557 3.840 - 3.864: 66.8463% ( 410) 00:13:03.557 3.864 - 3.887: 70.6098% ( 461) 00:13:03.557 3.887 - 3.911: 74.5449% ( 482) 00:13:03.557 3.911 - 3.935: 78.8799% ( 531) 00:13:03.557 3.935 - 3.959: 81.8353% ( 362) 00:13:03.557 3.959 - 3.982: 84.5865% ( 337) 00:13:03.557 3.982 - 4.006: 87.1010% ( 308) 00:13:03.557 4.006 - 4.030: 88.6521% ( 190) 00:13:03.557 4.030 - 4.053: 89.7543% ( 135) 00:13:03.557 4.053 - 4.077: 90.9135% ( 142) 00:13:03.557 4.077 - 4.101: 91.7054% ( 97) 00:13:03.557 4.101 - 4.124: 92.3912% ( 84) 00:13:03.557 4.124 - 4.148: 93.0117% ( 76) 00:13:03.557 4.148 - 4.172: 93.6321% ( 76) 00:13:03.557 4.172 - 4.196: 93.9587% ( 40) 00:13:03.557 4.196 - 4.219: 94.4077% ( 55) 00:13:03.557 4.219 - 4.243: 94.7261% ( 39) 00:13:03.557 4.243 - 4.267: 94.9547% ( 28) 00:13:03.557 4.267 - 4.290: 95.3057% ( 43) 00:13:03.557 4.290 - 4.314: 95.5180% ( 26) 00:13:03.557 4.314 - 4.338: 95.7466% ( 28) 00:13:03.557 4.338 - 4.361: 95.8609% ( 14) 00:13:03.557 4.361 - 4.385: 95.9670% ( 13) 00:13:03.557 4.385 - 4.409: 96.0405% ( 9) 00:13:03.557 4.409 - 4.433: 96.1140% ( 9) 00:13:03.557 4.433 - 4.456: 96.2038% ( 11) 00:13:03.557 4.456 - 4.480: 96.3344% ( 16) 00:13:03.557 4.480 - 4.504: 96.4650% ( 16) 00:13:03.557 4.504 - 4.527: 96.5956% ( 16) 00:13:03.557 4.527 - 4.551: 96.6446% ( 6) 00:13:03.557 4.551 - 4.575: 96.7018% ( 7) 00:13:03.557 4.575 - 4.599: 96.7508% ( 6) 00:13:03.557 4.599 - 4.622: 96.8079% ( 7) 00:13:03.557 4.622 - 4.646: 96.8732% ( 8) 00:13:03.557 4.646 - 4.670: 96.9059% ( 4) 00:13:03.557 4.670 - 4.693: 96.9385% ( 4) 00:13:03.557 4.717 - 4.741: 96.9630% ( 3) 00:13:03.557 4.741 - 4.764: 97.0038% ( 5) 00:13:03.557 4.764 - 4.788: 97.0120% ( 1) 00:13:03.557 4.788 - 4.812: 97.0365% ( 3) 00:13:03.557 4.812 - 4.836: 97.0447% ( 1) 00:13:03.557 4.836 - 4.859: 97.1181% ( 9) 00:13:03.557 4.859 - 4.883: 97.1753% ( 7) 00:13:03.557 4.883 - 4.907: 97.2406% ( 8) 00:13:03.557 4.907 - 4.930: 97.2488% ( 1) 00:13:03.557 4.930 - 4.954: 97.2896% ( 5) 00:13:03.557 4.954 - 4.978: 97.3875% ( 12) 00:13:03.557 4.978 - 5.001: 97.4447% ( 7) 00:13:03.557 5.001 - 5.025: 97.4610% ( 2) 00:13:03.557 5.025 - 5.049: 97.5182% ( 7) 00:13:03.557 5.049 - 5.073: 97.5590% ( 5) 00:13:03.557 5.073 - 5.096: 97.5753% ( 2) 00:13:03.557 5.096 - 5.120: 97.6161% ( 5) 00:13:03.557 5.120 - 5.144: 97.6406% ( 3) 00:13:03.557 5.144 - 5.167: 97.6570% ( 2) 00:13:03.557 5.167 - 5.191: 97.6978% ( 5) 00:13:03.557 5.191 - 5.215: 97.7468% ( 6) 00:13:03.557 5.215 - 5.239: 97.7957% ( 6) 00:13:03.557 5.239 - 5.262: 97.8121% ( 2) 00:13:03.557 5.262 - 5.286: 97.8447% ( 4) 00:13:03.557 5.310 - 5.333: 97.8610% ( 2) 00:13:03.557 5.333 - 5.357: 97.8774% ( 2) 00:13:03.557 5.357 - 5.381: 97.9100% ( 4) 
00:13:03.557 5.381 - 5.404: 97.9345% ( 3) 00:13:03.557 5.404 - 5.428: 97.9427% ( 1) 00:13:03.557 5.428 - 5.452: 97.9753% ( 4) 00:13:03.557 5.452 - 5.476: 97.9917% ( 2) 00:13:03.557 5.476 - 5.499: 97.9998% ( 1) 00:13:03.557 5.499 - 5.523: 98.0080% ( 1) 00:13:03.557 5.523 - 5.547: 98.0243% ( 2) 00:13:03.557 5.570 - 5.594: 98.0325% ( 1) 00:13:03.557 5.618 - 5.641: 98.0407% ( 1) 00:13:03.557 5.641 - 5.665: 98.0488% ( 1) 00:13:03.557 5.689 - 5.713: 98.0570% ( 1) 00:13:03.557 5.760 - 5.784: 98.0651% ( 1) 00:13:03.557 5.807 - 5.831: 98.0733% ( 1) 00:13:03.557 5.926 - 5.950: 98.0896% ( 2) 00:13:03.557 6.021 - 6.044: 98.0978% ( 1) 00:13:03.557 6.044 - 6.068: 98.1060% ( 1) 00:13:03.557 6.068 - 6.116: 98.1223% ( 2) 00:13:03.557 6.116 - 6.163: 98.1305% ( 1) 00:13:03.557 6.163 - 6.210: 98.1386% ( 1) 00:13:03.557 6.258 - 6.305: 98.1631% ( 3) 00:13:03.557 6.353 - 6.400: 98.1794% ( 2) 00:13:03.557 6.447 - 6.495: 98.1958% ( 2) 00:13:03.557 6.590 - 6.637: 98.2039% ( 1) 00:13:03.557 6.637 - 6.684: 98.2121% ( 1) 00:13:03.557 6.827 - 6.874: 98.2203% ( 1) 00:13:03.557 7.538 - 7.585: 98.2284% ( 1) 00:13:03.557 7.585 - 7.633: 98.2448% ( 2) 00:13:03.557 7.680 - 7.727: 98.2611% ( 2) 00:13:03.557 7.775 - 7.822: 98.2774% ( 2) 00:13:03.557 7.822 - 7.870: 98.2937% ( 2) 00:13:03.557 7.964 - 8.012: 98.3019% ( 1) 00:13:03.557 8.059 - 8.107: 98.3101% ( 1) 00:13:03.557 8.107 - 8.154: 98.3427% ( 4) 00:13:03.557 8.249 - 8.296: 98.3590% ( 2) 00:13:03.557 8.391 - 8.439: 98.3672% ( 1) 00:13:03.557 8.439 - 8.486: 98.3754% ( 1) 00:13:03.557 8.533 - 8.581: 98.3917% ( 2) 00:13:03.557 8.581 - 8.628: 98.4080% ( 2) 00:13:03.557 8.628 - 8.676: 98.4162% ( 1) 00:13:03.557 8.723 - 8.770: 98.4244% ( 1) 00:13:03.557 8.770 - 8.818: 98.4570% ( 4) 00:13:03.557 8.913 - 8.960: 98.4652% ( 1) 00:13:03.557 9.007 - 9.055: 98.4815% ( 2) 00:13:03.557 9.055 - 9.102: 98.4897% ( 1) 00:13:03.557 9.150 - 9.197: 98.5060% ( 2) 00:13:03.557 9.244 - 9.292: 98.5142% ( 1) 00:13:03.557 9.576 - 9.624: 98.5223% ( 1) 00:13:03.557 9.719 - 9.766: 98.5305% ( 1) 00:13:03.557 9.908 - 9.956: 98.5550% ( 3) 00:13:03.557 10.382 - 10.430: 98.5713% ( 2) 00:13:03.557 10.477 - 10.524: 98.5795% ( 1) 00:13:03.557 10.572 - 10.619: 98.5958% ( 2) 00:13:03.557 10.999 - 11.046: 98.6040% ( 1) 00:13:03.557 11.283 - 11.330: 98.6121% ( 1) 00:13:03.557 11.425 - 11.473: 98.6203% ( 1) 00:13:03.557 11.473 - 11.520: 98.6285% ( 1) 00:13:03.557 11.615 - 11.662: 98.6366% ( 1) 00:13:03.557 11.899 - 11.947: 98.6448% ( 1) 00:13:03.557 11.947 - 11.994: 98.6530% ( 1) 00:13:03.557 12.089 - 12.136: 98.6693% ( 2) 00:13:03.558 12.326 - 12.421: 98.6774% ( 1) 00:13:03.558 12.516 - 12.610: 98.6856% ( 1) 00:13:03.558 12.610 - 12.705: 98.6938% ( 1) 00:13:03.558 12.895 - 12.990: 98.7019% ( 1) 00:13:03.558 12.990 - 13.084: 98.7101% ( 1) 00:13:03.558 13.274 - 13.369: 98.7183% ( 1) 00:13:03.558 13.464 - 13.559: 98.7428% ( 3) 00:13:03.558 13.559 - 13.653: 98.7591% ( 2) 00:13:03.558 13.843 - 13.938: 98.7754% ( 2) 00:13:03.558 14.033 - 14.127: 98.7836% ( 1) 00:13:03.558 14.412 - 14.507: 98.7917% ( 1) 00:13:03.558 15.076 - 15.170: 98.7999% ( 1) 00:13:03.558 17.067 - 17.161: 98.8162% ( 2) 00:13:03.558 17.161 - 17.256: 98.8326% ( 2) 00:13:03.558 17.256 - 17.351: 98.8407% ( 1) 00:13:03.558 17.351 - 17.446: 98.8652% ( 3) 00:13:03.558 17.446 - 17.541: 98.8734% ( 1) 00:13:03.558 17.541 - 17.636: 98.9142% ( 5) 00:13:03.558 17.636 - 17.730: 98.9550% ( 5) 00:13:03.558 17.730 - 17.825: 98.9795% ( 3) 00:13:03.558 17.825 - 17.920: 99.0122% ( 4) 00:13:03.558 17.920 - 18.015: 99.0938% ( 10) 00:13:03.558 18.015 - 18.110: 99.1591% ( 8) 
00:13:03.558 18.110 - 18.204: 99.2081% ( 6) 00:13:03.558 18.204 - 18.299: 99.3387% ( 16) 00:13:03.558 18.299 - 18.394: 99.4204% ( 10) 00:13:03.558 18.394 - 18.489: 99.4693% ( 6) 00:13:03.558 18.489 - 18.584: 99.5510% ( 10) 00:13:03.558 18.584 - 18.679: 99.5918% ( 5) 00:13:03.558 18.679 - 18.773: 99.6490% ( 7) 00:13:03.558 18.773 - 18.868: 99.6816% ( 4) 00:13:03.558 18.868 - 18.963: 99.7306% ( 6) 00:13:03.558 18.963 - 19.058: 99.7632% ( 4) 00:13:03.558 19.058 - 19.153: 99.7714% ( 1) 00:13:03.558 19.153 - 19.247: 99.7796% ( 1) 00:13:03.558 19.437 - 19.532: 99.7877% ( 1) 00:13:03.558 19.627 - 19.721: 99.7959% ( 1) 00:13:03.558 19.721 - 19.816: 99.8041% ( 1) 00:13:03.558 19.816 - 19.911: 99.8122% ( 1) 00:13:03.558 19.911 - 20.006: 99.8286% ( 2) 00:13:03.558 21.144 - 21.239: 99.8367% ( 1) 00:13:03.558 21.239 - 21.333: 99.8530% ( 2) 00:13:03.558 21.997 - 22.092: 99.8612% ( 1) 00:13:03.558 23.514 - 23.609: 99.8694% ( 1) 00:13:03.558 24.083 - 24.178: 99.8775% ( 1) 00:13:03.558 25.979 - 26.169: 99.8857% ( 1) 00:13:03.558 28.444 - 28.634: 99.8939% ( 1) 00:13:03.558 32.427 - 32.616: 99.9020% ( 1) 00:13:03.558 33.564 - 33.754: 99.9102% ( 1) 00:13:03.558 3980.705 - 4004.978: 99.9673% ( 7) 00:13:03.558 4004.978 - 4029.250: 100.0000% ( 4) 00:13:03.558 00:13:03.558 Complete histogram 00:13:03.558 ================== 00:13:03.558 Range in us Cumulative Count 00:13:03.558 2.062 - 2.074: 1.0776% ( 132) 00:13:03.558 2.074 - 2.086: 30.8842% ( 3651) 00:13:03.558 2.086 - 2.098: 39.9543% ( 1111) 00:13:03.558 2.098 - 2.110: 44.2893% ( 531) 00:13:03.558 2.110 - 2.121: 57.8986% ( 1667) 00:13:03.558 2.121 - 2.133: 60.1518% ( 276) 00:13:03.558 2.133 - 2.145: 64.3073% ( 509) 00:13:03.558 2.145 - 2.157: 73.3856% ( 1112) 00:13:03.558 2.157 - 2.169: 75.1490% ( 216) 00:13:03.558 2.169 - 2.181: 77.9819% ( 347) 00:13:03.558 2.181 - 2.193: 81.5250% ( 434) 00:13:03.558 2.193 - 2.204: 82.1781% ( 80) 00:13:03.558 2.204 - 2.216: 83.4599% ( 157) 00:13:03.558 2.216 - 2.228: 87.0030% ( 434) 00:13:03.558 2.228 - 2.240: 89.1175% ( 259) 00:13:03.558 2.240 - 2.252: 90.6605% ( 189) 00:13:03.558 2.252 - 2.264: 92.2688% ( 197) 00:13:03.558 2.264 - 2.276: 92.5953% ( 40) 00:13:03.558 2.276 - 2.287: 92.7994% ( 25) 00:13:03.558 2.287 - 2.299: 93.2321% ( 53) 00:13:03.558 2.299 - 2.311: 93.9260% ( 85) 00:13:03.558 2.311 - 2.323: 94.2934% ( 45) 00:13:03.558 2.323 - 2.335: 94.4159% ( 15) 00:13:03.558 2.335 - 2.347: 94.4649% ( 6) 00:13:03.558 2.347 - 2.359: 94.5220% ( 7) 00:13:03.558 2.359 - 2.370: 94.5791% ( 7) 00:13:03.558 2.370 - 2.382: 94.7016% ( 15) 00:13:03.558 2.382 - 2.394: 94.8486% ( 18) 00:13:03.558 2.394 - 2.406: 95.1261% ( 34) 00:13:03.558 2.406 - 2.418: 95.2159% ( 11) 00:13:03.558 2.418 - 2.430: 95.3547% ( 17) 00:13:03.558 2.430 - 2.441: 95.5507% ( 24) 00:13:03.558 2.441 - 2.453: 95.7466% ( 24) 00:13:03.558 2.453 - 2.465: 95.9997% ( 31) 00:13:03.558 2.465 - 2.477: 96.1956% ( 24) 00:13:03.558 2.477 - 2.489: 96.3834% ( 23) 00:13:03.558 2.489 - 2.501: 96.5630% ( 22) 00:13:03.558 2.501 - 2.513: 96.7671% ( 25) 00:13:03.558 2.513 - 2.524: 96.9793% ( 26) 00:13:03.558 2.524 - 2.536: 97.1345% ( 19) 00:13:03.558 2.536 - 2.548: 97.2651% ( 16) 00:13:03.558 2.548 - 2.560: 97.3549% ( 11) 00:13:03.558 2.560 - 2.572: 97.4692% ( 14) 00:13:03.558 2.572 - 2.584: 97.5835% ( 14) 00:13:03.558 2.584 - 2.596: 97.6570% ( 9) 00:13:03.558 2.596 - 2.607: 97.7304% ( 9) 00:13:03.558 2.607 - 2.619: 97.7712% ( 5) 00:13:03.558 2.619 - 2.631: 97.8039% ( 4) 00:13:03.558 2.631 - 2.643: 97.8284% ( 3) 00:13:03.558 2.643 - 2.655: 97.8366% ( 1) 00:13:03.558 2.655 - 2.667: 
97.8447% ( 1) 00:13:03.558 2.667 - 2.679: 97.8610% ( 2) 00:13:03.558 2.679 - 2.690: 97.8774% ( 2) 00:13:03.558 2.690 - 2.702: 97.9019% ( 3) 00:13:03.558 2.702 - 2.714: 97.9182% ( 2) 00:13:03.558 2.714 - 2.726: 97.9345% ( 2) 00:13:03.558 2.738 - 2.750: 97.9427% ( 1) 00:13:03.558 2.750 - 2.761: 97.9590% ( 2) 00:13:03.558 2.761 - 2.773: 97.9672% ( 1) 00:13:03.558 2.773 - 2.785: 97.9835% ( 2) 00:13:03.558 2.797 - 2.809: 97.9917% ( 1) 00:13:03.558 2.809 - 2.821: 97.9998% ( 1) 00:13:03.558 2.821 - 2.833: 98.0162% ( 2) 00:13:03.558 2.833 - 2.844: 98.0325% ( 2) 00:13:03.558 2.856 - 2.868: 98.0407% ( 1) 00:13:03.558 2.892 - 2.904: 98.0488% ( 1) 00:13:03.558 2.904 - 2.916: 98.0570% ( 1) 00:13:03.558 2.999 - 3.010: 98.0651% ( 1) 00:13:03.558 3.010 - 3.022: 98.0733% ( 1) 00:13:03.558 3.022 - 3.034: 98.0815% ( 1) 00:13:03.558 3.081 - 3.105: 98.0896% ( 1) 00:13:03.558 3.105 - 3.129: 98.1060% ( 2) 00:13:03.558 3.129 - 3.153: 98.1305% ( 3) 00:13:03.558 3.153 - 3.176: 98.1468% ( 2) 00:13:03.558 3.224 - 3.247: 98.1713% ( 3) 00:13:03.558 3.247 - 3.271: 98.1794% ( 1) 00:13:03.558 3.271 - 3.295: 98.1958% ( 2) 00:13:03.558 3.295 - 3.319: 98.2039% ( 1) 00:13:03.558 3.342 - 3.366: 98.2203% ( 2) 00:13:03.558 3.366 - 3.390: 98.2529% ( 4) 00:13:03.558 3.390 - 3.413: 98.2692% ( 2) 00:13:03.558 3.413 - 3.437: 98.2774% ( 1) 00:13:03.558 3.437 - 3.461: 98.2856% ( 1) 00:13:03.558 3.461 - 3.484: 98.2937% ( 1) 00:13:03.558 3.484 - 3.508: 98.3427% ( 6) 00:13:03.558 3.508 - 3.532: 98.3509% ( 1) 00:13:03.558 3.532 - 3.556: 98.3754% ( 3) 00:13:03.558 3.556 - 3.579: 98.3835% ( 1) 00:13:03.558 3.579 - 3.603: 98.3917% ( 1) 00:13:03.558 3.603 - 3.627: 98.4080% ( 2) 00:13:03.558 3.627 - 3.650: 98.4162% ( 1) 00:13:03.558 3.674 - 3.698: 98.4407% ( 3) 00:13:03.558 3.721 - 3.745: 98.4570% ( 2) 00:13:03.558 3.793 - 3.816: 98.4733% ( 2) 00:13:03.558 3.864 - 3.887: 98.4897% ( 2) 00:13:03.558 3.959 - 3.982: 98.4978% ( 1) 00:13:03.558 3.982 - 4.006: 98.5060% ( 1) 00:13:03.558 4.053 - 4.077: 98.5142% ( 1) 00:13:03.558 4.077 - 4.101: 98.5223% ( 1) 00:13:03.558 4.148 - 4.172: 98.5305% ( 1) 00:13:03.558 4.527 - 4.551: 98.5387% ( 1) 00:13:03.558 5.689 - 5.713: 98.5468% ( 1) 00:13:03.558 5.760 - 5.784: 98.5550% ( 1) 00:13:03.558 6.021 - 6.044: 98.5631% ( 1) 00:13:03.558 6.044 - 6.068: 98.5713% ( 1) 00:13:03.558 6.116 - 6.163: 98.5795% ( 1) 00:13:03.558 6.353 - 6.400: 98.5876% ( 1) 00:13:03.558 6.542 - 6.590: 98.5958% ( 1) 00:13:03.558 6.732 - 6.779: 98.6121% ( 2) 00:13:03.558 6.779 - 6.827: 98.6203% ( 1) 00:13:03.558 6.874 - 6.921: 98.6285% ( 1) 00:13:03.558 7.016 - 7.064: 98.6366% ( 1) 00:13:03.558 7.064 - 7.111: 98.6448% ( 1) 00:13:03.558 7.396 - 7.443: 98.6530% ( 1) 00:13:03.558 7.538 - 7.585: 98.6611% ( 1) 00:13:03.558 7.870 - 7.917: 98.6693% ( 1) 00:13:03.558 8.249 - 8.296: 98.6774% ( 1) 00:13:03.558 8.296 - 8.344: 98.6856% ( 1) 00:13:03.558 8.818 - 8.865: 98.6938% ( 1) 00:13:03.558 9.055 - 9.102: 98.7019% ( 1) 00:13:03.558 15.550 - 15.644: 98.7101% ( 1) 00:13:03.558 15.644 - 15.739: 98.7183% ( 1) 00:13:03.558 15.739 - 15.834: 98.7264% ( 1) 00:13:03.558 15.929 - 16.024: 98.7591% ( 4) 00:13:03.558 16.024 - 16.119: 98.8081% ( 6) 00:13:03.558 16.119 - 16.213: 98.8244% ( 2) 00:13:03.558 16.213 - 16.308: 98.8326% ( 1) 00:13:03.558 16.308 - 16.403: 98.8407% ( 1) 00:13:03.558 16.403 - 16.498: 98.9060% ( 8) 00:13:03.558 16.498 - 16.593: 98.9713% ( 8) 00:13:03.558 16.593 - 16.687: 99.0285% ( 7) 00:13:03.558 16.687 - 16.782: 99.0856% ( 7) 00:13:03.558 16.782 - 16.877: 99.1346% ( 6) 00:13:03.558 16.877 - 16.972: 99.1754% ( 5) 00:13:03.558 16.972 - 
17.067: 99.1836% ( 1) 00:13:03.558 17.067 - 17.161: 99.2163% ( 4) 00:13:03.558 17.161 - 17.256: 99.2326% ( 2) 00:13:03.558 17.256 - 17.351: 99.2408% ( 1) 00:13:03.558 17.351 - 17.446: 99.2489% ( 1) 00:13:03.558 17.446 - 17.541: 99.2571% ( 1) 00:13:03.558 17.541 - 17.636: 99.2734% ( 2) 00:13:03.559 17.636 - 17.730: 99.2897% ( 2) 00:13:03.559 17.730 - 17.825: 99.2979% ( 1) 00:13:03.559 17.825 - 17.920: 99.3061% ( 1) 00:13:03.559 18.110 - 18.204: 99.3142% ( 1) 00:13:03.559 18.299 - 18.394: 99.3306% ( 2) 00:13:03.559 18.394 - 18.489: 99.3387% ( 1) 00:13:03.559 18.489 - 18.584: 99.3469% ( 1) 00:13:03.559 18.584 - 18.679: 99.3632% ( 2) 00:13:03.559 18.963 - 19.058: 99.3714% ( 1) 00:13:03.559 110.744 - 111.502: 99.3795%[2024-12-10 04:00:57.826158] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:03.559 ( 1) 00:13:03.559 2572.895 - 2585.031: 99.3877% ( 1) 00:13:03.559 3009.801 - 3021.938: 99.3959% ( 1) 00:13:03.559 3980.705 - 4004.978: 99.7306% ( 41) 00:13:03.559 4004.978 - 4029.250: 99.9918% ( 32) 00:13:03.559 4975.881 - 5000.154: 100.0000% ( 1) 00:13:03.559 00:13:03.559 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:03.559 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:03.559 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:03.559 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:03.559 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:03.817 [ 00:13:03.817 { 00:13:03.817 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:03.817 "subtype": "Discovery", 00:13:03.817 "listen_addresses": [], 00:13:03.817 "allow_any_host": true, 00:13:03.817 "hosts": [] 00:13:03.817 }, 00:13:03.817 { 00:13:03.817 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:03.817 "subtype": "NVMe", 00:13:03.817 "listen_addresses": [ 00:13:03.817 { 00:13:03.817 "trtype": "VFIOUSER", 00:13:03.817 "adrfam": "IPv4", 00:13:03.817 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:03.817 "trsvcid": "0" 00:13:03.817 } 00:13:03.817 ], 00:13:03.817 "allow_any_host": true, 00:13:03.817 "hosts": [], 00:13:03.817 "serial_number": "SPDK1", 00:13:03.817 "model_number": "SPDK bdev Controller", 00:13:03.817 "max_namespaces": 32, 00:13:03.817 "min_cntlid": 1, 00:13:03.817 "max_cntlid": 65519, 00:13:03.817 "namespaces": [ 00:13:03.817 { 00:13:03.817 "nsid": 1, 00:13:03.817 "bdev_name": "Malloc1", 00:13:03.817 "name": "Malloc1", 00:13:03.817 "nguid": "3383A137C9E94F418F0853492C165810", 00:13:03.817 "uuid": "3383a137-c9e9-4f41-8f08-53492c165810" 00:13:03.817 } 00:13:03.817 ] 00:13:03.817 }, 00:13:03.817 { 00:13:03.817 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:03.817 "subtype": "NVMe", 00:13:03.817 "listen_addresses": [ 00:13:03.817 { 00:13:03.817 "trtype": "VFIOUSER", 00:13:03.817 "adrfam": "IPv4", 00:13:03.817 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:03.817 "trsvcid": "0" 00:13:03.817 } 00:13:03.818 ], 00:13:03.818 "allow_any_host": true, 00:13:03.818 "hosts": [], 00:13:03.818 "serial_number": "SPDK2", 00:13:03.818 "model_number": "SPDK bdev Controller", 00:13:03.818 
"max_namespaces": 32, 00:13:03.818 "min_cntlid": 1, 00:13:03.818 "max_cntlid": 65519, 00:13:03.818 "namespaces": [ 00:13:03.818 { 00:13:03.818 "nsid": 1, 00:13:03.818 "bdev_name": "Malloc2", 00:13:03.818 "name": "Malloc2", 00:13:03.818 "nguid": "3FF0612BD2DE4BD6BA41658031EC2DF6", 00:13:03.818 "uuid": "3ff0612b-d2de-4bd6-ba41-658031ec2df6" 00:13:03.818 } 00:13:03.818 ] 00:13:03.818 } 00:13:03.818 ] 00:13:03.818 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:03.818 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2369507 00:13:03.818 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:03.818 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:03.818 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:03.818 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:03.818 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:03.818 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:03.818 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:03.818 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:04.076 [2024-12-10 04:00:58.324100] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:04.076 Malloc3 00:13:04.076 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:04.333 [2024-12-10 04:00:58.701032] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:04.591 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:04.591 Asynchronous Event Request test 00:13:04.591 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:04.591 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:04.591 Registering asynchronous event callbacks... 00:13:04.591 Starting namespace attribute notice tests for all controllers... 00:13:04.591 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:04.591 aer_cb - Changed Namespace 00:13:04.591 Cleaning up... 
00:13:04.851 [ 00:13:04.851 { 00:13:04.851 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:04.851 "subtype": "Discovery", 00:13:04.851 "listen_addresses": [], 00:13:04.851 "allow_any_host": true, 00:13:04.851 "hosts": [] 00:13:04.851 }, 00:13:04.851 { 00:13:04.851 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:04.851 "subtype": "NVMe", 00:13:04.851 "listen_addresses": [ 00:13:04.851 { 00:13:04.851 "trtype": "VFIOUSER", 00:13:04.851 "adrfam": "IPv4", 00:13:04.851 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:04.851 "trsvcid": "0" 00:13:04.851 } 00:13:04.851 ], 00:13:04.851 "allow_any_host": true, 00:13:04.851 "hosts": [], 00:13:04.851 "serial_number": "SPDK1", 00:13:04.851 "model_number": "SPDK bdev Controller", 00:13:04.851 "max_namespaces": 32, 00:13:04.851 "min_cntlid": 1, 00:13:04.851 "max_cntlid": 65519, 00:13:04.851 "namespaces": [ 00:13:04.851 { 00:13:04.851 "nsid": 1, 00:13:04.851 "bdev_name": "Malloc1", 00:13:04.851 "name": "Malloc1", 00:13:04.851 "nguid": "3383A137C9E94F418F0853492C165810", 00:13:04.851 "uuid": "3383a137-c9e9-4f41-8f08-53492c165810" 00:13:04.851 }, 00:13:04.851 { 00:13:04.851 "nsid": 2, 00:13:04.851 "bdev_name": "Malloc3", 00:13:04.851 "name": "Malloc3", 00:13:04.851 "nguid": "3B67B2114BAD437798C59A1617DC8D99", 00:13:04.851 "uuid": "3b67b211-4bad-4377-98c5-9a1617dc8d99" 00:13:04.851 } 00:13:04.851 ] 00:13:04.851 }, 00:13:04.851 { 00:13:04.851 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:04.851 "subtype": "NVMe", 00:13:04.851 "listen_addresses": [ 00:13:04.851 { 00:13:04.851 "trtype": "VFIOUSER", 00:13:04.851 "adrfam": "IPv4", 00:13:04.851 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:04.851 "trsvcid": "0" 00:13:04.851 } 00:13:04.851 ], 00:13:04.851 "allow_any_host": true, 00:13:04.851 "hosts": [], 00:13:04.851 "serial_number": "SPDK2", 00:13:04.851 "model_number": "SPDK bdev Controller", 00:13:04.851 "max_namespaces": 32, 00:13:04.851 "min_cntlid": 1, 00:13:04.851 "max_cntlid": 65519, 00:13:04.851 "namespaces": [ 00:13:04.851 { 00:13:04.851 "nsid": 1, 00:13:04.851 "bdev_name": "Malloc2", 00:13:04.851 "name": "Malloc2", 00:13:04.851 "nguid": "3FF0612BD2DE4BD6BA41658031EC2DF6", 00:13:04.851 "uuid": "3ff0612b-d2de-4bd6-ba41-658031ec2df6" 00:13:04.851 } 00:13:04.851 ] 00:13:04.851 } 00:13:04.851 ] 00:13:04.851 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2369507 00:13:04.851 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:04.851 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:04.851 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:04.851 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:04.851 [2024-12-10 04:00:59.013976] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:13:04.851 [2024-12-10 04:00:59.014029] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2369574 ] 00:13:04.851 [2024-12-10 04:00:59.062326] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:04.851 [2024-12-10 04:00:59.074923] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:04.851 [2024-12-10 04:00:59.074961] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2e76566000 00:13:04.851 [2024-12-10 04:00:59.075918] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:04.851 [2024-12-10 04:00:59.076919] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:04.851 [2024-12-10 04:00:59.077945] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:04.851 [2024-12-10 04:00:59.078936] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:04.851 [2024-12-10 04:00:59.079944] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:04.851 [2024-12-10 04:00:59.080949] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:04.851 [2024-12-10 04:00:59.081955] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:04.851 [2024-12-10 04:00:59.082965] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:04.851 [2024-12-10 04:00:59.083975] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:04.851 [2024-12-10 04:00:59.083997] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2e7655b000 00:13:04.851 [2024-12-10 04:00:59.085113] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:04.851 [2024-12-10 04:00:59.098911] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:04.851 [2024-12-10 04:00:59.098948] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:04.851 [2024-12-10 04:00:59.101039] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:04.851 [2024-12-10 04:00:59.101093] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:04.851 [2024-12-10 04:00:59.101182] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:04.851 
[2024-12-10 04:00:59.101205] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:04.851 [2024-12-10 04:00:59.101216] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:04.851 [2024-12-10 04:00:59.102050] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:04.851 [2024-12-10 04:00:59.102076] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:04.851 [2024-12-10 04:00:59.102090] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:04.851 [2024-12-10 04:00:59.103057] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:04.851 [2024-12-10 04:00:59.103078] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:04.851 [2024-12-10 04:00:59.103092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:04.851 [2024-12-10 04:00:59.104064] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:04.851 [2024-12-10 04:00:59.104085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:04.851 [2024-12-10 04:00:59.105075] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:04.851 [2024-12-10 04:00:59.105095] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:04.851 [2024-12-10 04:00:59.105104] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:04.851 [2024-12-10 04:00:59.105115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:04.851 [2024-12-10 04:00:59.105225] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:04.851 [2024-12-10 04:00:59.105233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:04.851 [2024-12-10 04:00:59.105241] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:04.851 [2024-12-10 04:00:59.109557] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:04.851 [2024-12-10 04:00:59.110116] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:04.851 [2024-12-10 04:00:59.111125] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:04.851 [2024-12-10 04:00:59.112118] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:04.851 [2024-12-10 04:00:59.112202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:04.851 [2024-12-10 04:00:59.113132] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:04.851 [2024-12-10 04:00:59.113152] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:04.851 [2024-12-10 04:00:59.113161] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:04.851 [2024-12-10 04:00:59.113184] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:04.851 [2024-12-10 04:00:59.113201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.113226] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:04.852 [2024-12-10 04:00:59.113235] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:04.852 [2024-12-10 04:00:59.113242] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:04.852 [2024-12-10 04:00:59.113260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:04.852 [2024-12-10 04:00:59.120561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:04.852 [2024-12-10 04:00:59.120591] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:04.852 [2024-12-10 04:00:59.120604] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:04.852 [2024-12-10 04:00:59.120613] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:04.852 [2024-12-10 04:00:59.120621] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:04.852 [2024-12-10 04:00:59.120629] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:04.852 [2024-12-10 04:00:59.120638] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:04.852 [2024-12-10 04:00:59.120645] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.120659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:04.852 [2024-12-10 
04:00:59.120675] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:04.852 [2024-12-10 04:00:59.128559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:04.852 [2024-12-10 04:00:59.128584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.852 [2024-12-10 04:00:59.128598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.852 [2024-12-10 04:00:59.128611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.852 [2024-12-10 04:00:59.128623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.852 [2024-12-10 04:00:59.128632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.128649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.128665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:04.852 [2024-12-10 04:00:59.136573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:04.852 [2024-12-10 04:00:59.136591] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:04.852 [2024-12-10 04:00:59.136600] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.136612] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.136622] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.136635] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:04.852 [2024-12-10 04:00:59.144560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:04.852 [2024-12-10 04:00:59.144638] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.144657] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.144679] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:04.852 [2024-12-10 04:00:59.144689] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:13:04.852 [2024-12-10 04:00:59.144696] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:04.852 [2024-12-10 04:00:59.144706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:04.852 [2024-12-10 04:00:59.152573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:04.852 [2024-12-10 04:00:59.152598] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:04.852 [2024-12-10 04:00:59.152617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.152632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.152646] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:04.852 [2024-12-10 04:00:59.152654] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:04.852 [2024-12-10 04:00:59.152660] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:04.852 [2024-12-10 04:00:59.152669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:04.852 [2024-12-10 04:00:59.160569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:04.852 [2024-12-10 04:00:59.160601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.160619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.160633] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:04.852 [2024-12-10 04:00:59.160641] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:04.852 [2024-12-10 04:00:59.160647] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:04.852 [2024-12-10 04:00:59.160657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:04.852 [2024-12-10 04:00:59.168571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:04.852 [2024-12-10 04:00:59.168593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.168606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.168622] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.168636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.168646] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.168654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.168667] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:04.852 [2024-12-10 04:00:59.168675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:04.852 [2024-12-10 04:00:59.168683] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:04.852 [2024-12-10 04:00:59.168708] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:04.852 [2024-12-10 04:00:59.176558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:04.852 [2024-12-10 04:00:59.176583] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:04.852 [2024-12-10 04:00:59.184556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:04.852 [2024-12-10 04:00:59.184582] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:04.852 [2024-12-10 04:00:59.192572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:04.852 [2024-12-10 04:00:59.192597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:04.852 [2024-12-10 04:00:59.200561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:04.852 [2024-12-10 04:00:59.200592] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:04.852 [2024-12-10 04:00:59.200604] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:04.852 [2024-12-10 04:00:59.200610] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:04.852 [2024-12-10 04:00:59.200616] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:04.852 [2024-12-10 04:00:59.200622] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:04.852 [2024-12-10 04:00:59.200631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:04.852 [2024-12-10 04:00:59.200643] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:04.852 
[2024-12-10 04:00:59.200651] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:04.852 [2024-12-10 04:00:59.200657] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:04.852 [2024-12-10 04:00:59.200666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:04.852 [2024-12-10 04:00:59.200677] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:04.852 [2024-12-10 04:00:59.200685] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:04.852 [2024-12-10 04:00:59.200691] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:04.852 [2024-12-10 04:00:59.200699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:04.852 [2024-12-10 04:00:59.200711] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:04.852 [2024-12-10 04:00:59.200719] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:04.853 [2024-12-10 04:00:59.200725] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:04.853 [2024-12-10 04:00:59.200738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:04.853 [2024-12-10 04:00:59.208557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:04.853 [2024-12-10 04:00:59.208586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:04.853 [2024-12-10 04:00:59.208604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:04.853 [2024-12-10 04:00:59.208616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:04.853 ===================================================== 00:13:04.853 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:04.853 ===================================================== 00:13:04.853 Controller Capabilities/Features 00:13:04.853 ================================ 00:13:04.853 Vendor ID: 4e58 00:13:04.853 Subsystem Vendor ID: 4e58 00:13:04.853 Serial Number: SPDK2 00:13:04.853 Model Number: SPDK bdev Controller 00:13:04.853 Firmware Version: 25.01 00:13:04.853 Recommended Arb Burst: 6 00:13:04.853 IEEE OUI Identifier: 8d 6b 50 00:13:04.853 Multi-path I/O 00:13:04.853 May have multiple subsystem ports: Yes 00:13:04.853 May have multiple controllers: Yes 00:13:04.853 Associated with SR-IOV VF: No 00:13:04.853 Max Data Transfer Size: 131072 00:13:04.853 Max Number of Namespaces: 32 00:13:04.853 Max Number of I/O Queues: 127 00:13:04.853 NVMe Specification Version (VS): 1.3 00:13:04.853 NVMe Specification Version (Identify): 1.3 00:13:04.853 Maximum Queue Entries: 256 00:13:04.853 Contiguous Queues Required: Yes 00:13:04.853 Arbitration Mechanisms Supported 00:13:04.853 Weighted Round Robin: Not Supported 00:13:04.853 Vendor Specific: Not 
Supported 00:13:04.853 Reset Timeout: 15000 ms 00:13:04.853 Doorbell Stride: 4 bytes 00:13:04.853 NVM Subsystem Reset: Not Supported 00:13:04.853 Command Sets Supported 00:13:04.853 NVM Command Set: Supported 00:13:04.853 Boot Partition: Not Supported 00:13:04.853 Memory Page Size Minimum: 4096 bytes 00:13:04.853 Memory Page Size Maximum: 4096 bytes 00:13:04.853 Persistent Memory Region: Not Supported 00:13:04.853 Optional Asynchronous Events Supported 00:13:04.853 Namespace Attribute Notices: Supported 00:13:04.853 Firmware Activation Notices: Not Supported 00:13:04.853 ANA Change Notices: Not Supported 00:13:04.853 PLE Aggregate Log Change Notices: Not Supported 00:13:04.853 LBA Status Info Alert Notices: Not Supported 00:13:04.853 EGE Aggregate Log Change Notices: Not Supported 00:13:04.853 Normal NVM Subsystem Shutdown event: Not Supported 00:13:04.853 Zone Descriptor Change Notices: Not Supported 00:13:04.853 Discovery Log Change Notices: Not Supported 00:13:04.853 Controller Attributes 00:13:04.853 128-bit Host Identifier: Supported 00:13:04.853 Non-Operational Permissive Mode: Not Supported 00:13:04.853 NVM Sets: Not Supported 00:13:04.853 Read Recovery Levels: Not Supported 00:13:04.853 Endurance Groups: Not Supported 00:13:04.853 Predictable Latency Mode: Not Supported 00:13:04.853 Traffic Based Keep ALive: Not Supported 00:13:04.853 Namespace Granularity: Not Supported 00:13:04.853 SQ Associations: Not Supported 00:13:04.853 UUID List: Not Supported 00:13:04.853 Multi-Domain Subsystem: Not Supported 00:13:04.853 Fixed Capacity Management: Not Supported 00:13:04.853 Variable Capacity Management: Not Supported 00:13:04.853 Delete Endurance Group: Not Supported 00:13:04.853 Delete NVM Set: Not Supported 00:13:04.853 Extended LBA Formats Supported: Not Supported 00:13:04.853 Flexible Data Placement Supported: Not Supported 00:13:04.853 00:13:04.853 Controller Memory Buffer Support 00:13:04.853 ================================ 00:13:04.853 Supported: No 00:13:04.853 00:13:04.853 Persistent Memory Region Support 00:13:04.853 ================================ 00:13:04.853 Supported: No 00:13:04.853 00:13:04.853 Admin Command Set Attributes 00:13:04.853 ============================ 00:13:04.853 Security Send/Receive: Not Supported 00:13:04.853 Format NVM: Not Supported 00:13:04.853 Firmware Activate/Download: Not Supported 00:13:04.853 Namespace Management: Not Supported 00:13:04.853 Device Self-Test: Not Supported 00:13:04.853 Directives: Not Supported 00:13:04.853 NVMe-MI: Not Supported 00:13:04.853 Virtualization Management: Not Supported 00:13:04.853 Doorbell Buffer Config: Not Supported 00:13:04.853 Get LBA Status Capability: Not Supported 00:13:04.853 Command & Feature Lockdown Capability: Not Supported 00:13:04.853 Abort Command Limit: 4 00:13:04.853 Async Event Request Limit: 4 00:13:04.853 Number of Firmware Slots: N/A 00:13:04.853 Firmware Slot 1 Read-Only: N/A 00:13:04.853 Firmware Activation Without Reset: N/A 00:13:04.853 Multiple Update Detection Support: N/A 00:13:04.853 Firmware Update Granularity: No Information Provided 00:13:04.853 Per-Namespace SMART Log: No 00:13:04.853 Asymmetric Namespace Access Log Page: Not Supported 00:13:04.853 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:04.853 Command Effects Log Page: Supported 00:13:04.853 Get Log Page Extended Data: Supported 00:13:04.853 Telemetry Log Pages: Not Supported 00:13:04.853 Persistent Event Log Pages: Not Supported 00:13:04.853 Supported Log Pages Log Page: May Support 00:13:04.853 Commands Supported & 
Effects Log Page: Not Supported 00:13:04.853 Feature Identifiers & Effects Log Page:May Support 00:13:04.853 NVMe-MI Commands & Effects Log Page: May Support 00:13:04.853 Data Area 4 for Telemetry Log: Not Supported 00:13:04.853 Error Log Page Entries Supported: 128 00:13:04.853 Keep Alive: Supported 00:13:04.853 Keep Alive Granularity: 10000 ms 00:13:04.853 00:13:04.853 NVM Command Set Attributes 00:13:04.853 ========================== 00:13:04.853 Submission Queue Entry Size 00:13:04.853 Max: 64 00:13:04.853 Min: 64 00:13:04.853 Completion Queue Entry Size 00:13:04.853 Max: 16 00:13:04.853 Min: 16 00:13:04.853 Number of Namespaces: 32 00:13:04.853 Compare Command: Supported 00:13:04.853 Write Uncorrectable Command: Not Supported 00:13:04.853 Dataset Management Command: Supported 00:13:04.853 Write Zeroes Command: Supported 00:13:04.853 Set Features Save Field: Not Supported 00:13:04.853 Reservations: Not Supported 00:13:04.853 Timestamp: Not Supported 00:13:04.853 Copy: Supported 00:13:04.853 Volatile Write Cache: Present 00:13:04.853 Atomic Write Unit (Normal): 1 00:13:04.853 Atomic Write Unit (PFail): 1 00:13:04.853 Atomic Compare & Write Unit: 1 00:13:04.853 Fused Compare & Write: Supported 00:13:04.853 Scatter-Gather List 00:13:04.853 SGL Command Set: Supported (Dword aligned) 00:13:04.853 SGL Keyed: Not Supported 00:13:04.853 SGL Bit Bucket Descriptor: Not Supported 00:13:04.853 SGL Metadata Pointer: Not Supported 00:13:04.853 Oversized SGL: Not Supported 00:13:04.853 SGL Metadata Address: Not Supported 00:13:04.853 SGL Offset: Not Supported 00:13:04.853 Transport SGL Data Block: Not Supported 00:13:04.853 Replay Protected Memory Block: Not Supported 00:13:04.853 00:13:04.853 Firmware Slot Information 00:13:04.853 ========================= 00:13:04.853 Active slot: 1 00:13:04.853 Slot 1 Firmware Revision: 25.01 00:13:04.853 00:13:04.853 00:13:04.853 Commands Supported and Effects 00:13:04.853 ============================== 00:13:04.853 Admin Commands 00:13:04.853 -------------- 00:13:04.853 Get Log Page (02h): Supported 00:13:04.853 Identify (06h): Supported 00:13:04.853 Abort (08h): Supported 00:13:04.853 Set Features (09h): Supported 00:13:04.853 Get Features (0Ah): Supported 00:13:04.853 Asynchronous Event Request (0Ch): Supported 00:13:04.853 Keep Alive (18h): Supported 00:13:04.853 I/O Commands 00:13:04.853 ------------ 00:13:04.853 Flush (00h): Supported LBA-Change 00:13:04.853 Write (01h): Supported LBA-Change 00:13:04.853 Read (02h): Supported 00:13:04.853 Compare (05h): Supported 00:13:04.853 Write Zeroes (08h): Supported LBA-Change 00:13:04.853 Dataset Management (09h): Supported LBA-Change 00:13:04.853 Copy (19h): Supported LBA-Change 00:13:04.853 00:13:04.853 Error Log 00:13:04.853 ========= 00:13:04.853 00:13:04.853 Arbitration 00:13:04.853 =========== 00:13:04.854 Arbitration Burst: 1 00:13:04.854 00:13:04.854 Power Management 00:13:04.854 ================ 00:13:04.854 Number of Power States: 1 00:13:04.854 Current Power State: Power State #0 00:13:04.854 Power State #0: 00:13:04.854 Max Power: 0.00 W 00:13:04.854 Non-Operational State: Operational 00:13:04.854 Entry Latency: Not Reported 00:13:04.854 Exit Latency: Not Reported 00:13:04.854 Relative Read Throughput: 0 00:13:04.854 Relative Read Latency: 0 00:13:04.854 Relative Write Throughput: 0 00:13:04.854 Relative Write Latency: 0 00:13:04.854 Idle Power: Not Reported 00:13:04.854 Active Power: Not Reported 00:13:04.854 Non-Operational Permissive Mode: Not Supported 00:13:04.854 00:13:04.854 Health Information 
00:13:04.854 ================== 00:13:04.854 Critical Warnings: 00:13:04.854 Available Spare Space: OK 00:13:04.854 Temperature: OK 00:13:04.854 Device Reliability: OK 00:13:04.854 Read Only: No 00:13:04.854 Volatile Memory Backup: OK 00:13:04.854 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:04.854 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:04.854 Available Spare: 0% 00:13:04.854 Available Sp[2024-12-10 04:00:59.208735] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:04.854 [2024-12-10 04:00:59.216556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:04.854 [2024-12-10 04:00:59.216605] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:04.854 [2024-12-10 04:00:59.216624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.854 [2024-12-10 04:00:59.216635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.854 [2024-12-10 04:00:59.216644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.854 [2024-12-10 04:00:59.216653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.854 [2024-12-10 04:00:59.216739] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:04.854 [2024-12-10 04:00:59.216760] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:04.854 [2024-12-10 04:00:59.217744] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:04.854 [2024-12-10 04:00:59.217833] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:04.854 [2024-12-10 04:00:59.217863] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:04.854 [2024-12-10 04:00:59.218752] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:04.854 [2024-12-10 04:00:59.218776] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:04.854 [2024-12-10 04:00:59.218829] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:04.854 [2024-12-10 04:00:59.220021] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:05.112 are Threshold: 0% 00:13:05.112 Life Percentage Used: 0% 00:13:05.112 Data Units Read: 0 00:13:05.112 Data Units Written: 0 00:13:05.112 Host Read Commands: 0 00:13:05.112 Host Write Commands: 0 00:13:05.112 Controller Busy Time: 0 minutes 00:13:05.112 Power Cycles: 0 00:13:05.112 Power On Hours: 0 hours 00:13:05.112 Unsafe Shutdowns: 0 00:13:05.112 Unrecoverable Media Errors: 0 00:13:05.112 Lifetime Error Log Entries: 0 00:13:05.112 Warning Temperature 
Time: 0 minutes 00:13:05.112 Critical Temperature Time: 0 minutes 00:13:05.112 00:13:05.112 Number of Queues 00:13:05.112 ================ 00:13:05.112 Number of I/O Submission Queues: 127 00:13:05.112 Number of I/O Completion Queues: 127 00:13:05.112 00:13:05.112 Active Namespaces 00:13:05.112 ================= 00:13:05.112 Namespace ID:1 00:13:05.112 Error Recovery Timeout: Unlimited 00:13:05.112 Command Set Identifier: NVM (00h) 00:13:05.112 Deallocate: Supported 00:13:05.112 Deallocated/Unwritten Error: Not Supported 00:13:05.112 Deallocated Read Value: Unknown 00:13:05.112 Deallocate in Write Zeroes: Not Supported 00:13:05.112 Deallocated Guard Field: 0xFFFF 00:13:05.112 Flush: Supported 00:13:05.112 Reservation: Supported 00:13:05.112 Namespace Sharing Capabilities: Multiple Controllers 00:13:05.112 Size (in LBAs): 131072 (0GiB) 00:13:05.112 Capacity (in LBAs): 131072 (0GiB) 00:13:05.112 Utilization (in LBAs): 131072 (0GiB) 00:13:05.112 NGUID: 3FF0612BD2DE4BD6BA41658031EC2DF6 00:13:05.112 UUID: 3ff0612b-d2de-4bd6-ba41-658031ec2df6 00:13:05.112 Thin Provisioning: Not Supported 00:13:05.112 Per-NS Atomic Units: Yes 00:13:05.112 Atomic Boundary Size (Normal): 0 00:13:05.112 Atomic Boundary Size (PFail): 0 00:13:05.112 Atomic Boundary Offset: 0 00:13:05.112 Maximum Single Source Range Length: 65535 00:13:05.112 Maximum Copy Length: 65535 00:13:05.112 Maximum Source Range Count: 1 00:13:05.112 NGUID/EUI64 Never Reused: No 00:13:05.112 Namespace Write Protected: No 00:13:05.112 Number of LBA Formats: 1 00:13:05.112 Current LBA Format: LBA Format #00 00:13:05.112 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:05.112 00:13:05.112 04:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:05.112 [2024-12-10 04:00:59.456313] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:10.394 Initializing NVMe Controllers 00:13:10.394 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:10.395 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:10.395 Initialization complete. Launching workers. 
00:13:10.395 ======================================================== 00:13:10.395 Latency(us) 00:13:10.395 Device Information : IOPS MiB/s Average min max 00:13:10.395 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31212.37 121.92 4100.32 1226.19 8098.30 00:13:10.395 ======================================================== 00:13:10.395 Total : 31212.37 121.92 4100.32 1226.19 8098.30 00:13:10.395 00:13:10.395 [2024-12-10 04:01:04.562949] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:10.395 04:01:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:10.652 [2024-12-10 04:01:04.826648] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:15.916 Initializing NVMe Controllers 00:13:15.916 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:15.916 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:15.916 Initialization complete. Launching workers. 00:13:15.916 ======================================================== 00:13:15.916 Latency(us) 00:13:15.916 Device Information : IOPS MiB/s Average min max 00:13:15.916 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30405.81 118.77 4209.87 1226.59 11327.75 00:13:15.916 ======================================================== 00:13:15.916 Total : 30405.81 118.77 4209.87 1226.59 11327.75 00:13:15.916 00:13:15.916 [2024-12-10 04:01:09.853249] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:15.916 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:15.916 [2024-12-10 04:01:10.084147] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:21.180 [2024-12-10 04:01:15.214698] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:21.180 Initializing NVMe Controllers 00:13:21.180 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:21.180 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:21.180 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:21.180 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:21.180 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:21.180 Initialization complete. Launching workers. 
00:13:21.180 Starting thread on core 2 00:13:21.180 Starting thread on core 3 00:13:21.181 Starting thread on core 1 00:13:21.181 04:01:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:21.181 [2024-12-10 04:01:15.536038] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:24.465 [2024-12-10 04:01:18.591726] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:24.465 Initializing NVMe Controllers 00:13:24.465 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:24.465 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:24.465 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:24.465 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:24.465 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:24.465 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:24.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:24.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:24.465 Initialization complete. Launching workers. 00:13:24.465 Starting thread on core 1 with urgent priority queue 00:13:24.465 Starting thread on core 2 with urgent priority queue 00:13:24.465 Starting thread on core 3 with urgent priority queue 00:13:24.465 Starting thread on core 0 with urgent priority queue 00:13:24.465 SPDK bdev Controller (SPDK2 ) core 0: 4597.67 IO/s 21.75 secs/100000 ios 00:13:24.465 SPDK bdev Controller (SPDK2 ) core 1: 4864.00 IO/s 20.56 secs/100000 ios 00:13:24.465 SPDK bdev Controller (SPDK2 ) core 2: 5336.33 IO/s 18.74 secs/100000 ios 00:13:24.465 SPDK bdev Controller (SPDK2 ) core 3: 5717.33 IO/s 17.49 secs/100000 ios 00:13:24.465 ======================================================== 00:13:24.465 00:13:24.465 04:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:24.724 [2024-12-10 04:01:18.911590] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:24.724 Initializing NVMe Controllers 00:13:24.724 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:24.724 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:24.724 Namespace ID: 1 size: 0GB 00:13:24.724 Initialization complete. 00:13:24.724 INFO: using host memory buffer for IO 00:13:24.724 Hello world! 
00:13:24.724 [2024-12-10 04:01:18.924884] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:24.724 04:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:24.982 [2024-12-10 04:01:19.237506] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:26.357 Initializing NVMe Controllers 00:13:26.357 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:26.357 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:26.357 Initialization complete. Launching workers. 00:13:26.357 submit (in ns) avg, min, max = 9784.3, 3514.4, 5005046.7 00:13:26.357 complete (in ns) avg, min, max = 28128.2, 2064.4, 7047583.3 00:13:26.357 00:13:26.357 Submit histogram 00:13:26.357 ================ 00:13:26.357 Range in us Cumulative Count 00:13:26.357 3.508 - 3.532: 0.2062% ( 25) 00:13:26.357 3.532 - 3.556: 0.6267% ( 51) 00:13:26.357 3.556 - 3.579: 2.2759% ( 200) 00:13:26.357 3.579 - 3.603: 4.9311% ( 322) 00:13:26.357 3.603 - 3.627: 10.1839% ( 637) 00:13:26.357 3.627 - 3.650: 17.3497% ( 869) 00:13:26.357 3.650 - 3.674: 26.3049% ( 1086) 00:13:26.357 3.674 - 3.698: 33.3966% ( 860) 00:13:26.357 3.698 - 3.721: 40.5871% ( 872) 00:13:26.357 3.721 - 3.745: 46.0543% ( 663) 00:13:26.357 3.745 - 3.769: 51.1503% ( 618) 00:13:26.357 3.769 - 3.793: 55.8918% ( 575) 00:13:26.357 3.793 - 3.816: 59.5366% ( 442) 00:13:26.357 3.816 - 3.840: 63.7256% ( 508) 00:13:26.357 3.840 - 3.864: 67.9476% ( 512) 00:13:26.357 3.864 - 3.887: 72.1036% ( 504) 00:13:26.357 3.887 - 3.911: 76.2266% ( 500) 00:13:26.357 3.911 - 3.935: 80.0775% ( 467) 00:13:26.357 3.935 - 3.959: 82.7822% ( 328) 00:13:26.357 3.959 - 3.982: 84.8025% ( 245) 00:13:26.357 3.982 - 4.006: 86.6579% ( 225) 00:13:26.357 4.006 - 4.030: 88.1834% ( 185) 00:13:26.357 4.030 - 4.053: 89.3791% ( 145) 00:13:26.357 4.053 - 4.077: 90.4593% ( 131) 00:13:26.358 4.077 - 4.101: 91.4488% ( 120) 00:13:26.358 4.101 - 4.124: 92.3312% ( 107) 00:13:26.358 4.124 - 4.148: 93.1310% ( 97) 00:13:26.358 4.148 - 4.172: 93.8732% ( 90) 00:13:26.358 4.172 - 4.196: 94.4009% ( 64) 00:13:26.358 4.196 - 4.219: 94.7720% ( 45) 00:13:26.358 4.219 - 4.243: 95.1018% ( 40) 00:13:26.358 4.243 - 4.267: 95.3162% ( 26) 00:13:26.358 4.267 - 4.290: 95.5636% ( 30) 00:13:26.358 4.290 - 4.314: 95.7120% ( 18) 00:13:26.358 4.314 - 4.338: 95.8605% ( 18) 00:13:26.358 4.338 - 4.361: 95.9842% ( 15) 00:13:26.358 4.361 - 4.385: 96.0831% ( 12) 00:13:26.358 4.385 - 4.409: 96.2068% ( 15) 00:13:26.358 4.409 - 4.433: 96.3305% ( 15) 00:13:26.358 4.433 - 4.456: 96.4459% ( 14) 00:13:26.358 4.456 - 4.480: 96.5284% ( 10) 00:13:26.358 4.480 - 4.504: 96.5779% ( 6) 00:13:26.358 4.504 - 4.527: 96.6274% ( 6) 00:13:26.358 4.527 - 4.551: 96.6603% ( 4) 00:13:26.358 4.551 - 4.575: 96.7016% ( 5) 00:13:26.358 4.575 - 4.599: 96.7346% ( 4) 00:13:26.358 4.599 - 4.622: 96.7840% ( 6) 00:13:26.358 4.622 - 4.646: 96.8005% ( 2) 00:13:26.358 4.646 - 4.670: 96.8088% ( 1) 00:13:26.358 4.693 - 4.717: 96.8170% ( 1) 00:13:26.358 4.717 - 4.741: 96.8335% ( 2) 00:13:26.358 4.741 - 4.764: 96.8500% ( 2) 00:13:26.358 4.764 - 4.788: 96.8830% ( 4) 00:13:26.358 4.788 - 4.812: 96.9077% ( 3) 00:13:26.358 4.812 - 4.836: 96.9490% ( 5) 00:13:26.358 4.836 - 4.859: 96.9572% ( 1) 00:13:26.358 4.859 - 4.883: 97.0149% ( 7) 00:13:26.358 4.883 
- 4.907: 97.0479% ( 4) 00:13:26.358 4.907 - 4.930: 97.1056% ( 7) 00:13:26.358 4.930 - 4.954: 97.1386% ( 4) 00:13:26.358 4.954 - 4.978: 97.1881% ( 6) 00:13:26.358 4.978 - 5.001: 97.2046% ( 2) 00:13:26.358 5.001 - 5.025: 97.2870% ( 10) 00:13:26.358 5.025 - 5.049: 97.3283% ( 5) 00:13:26.358 5.049 - 5.073: 97.3942% ( 8) 00:13:26.358 5.073 - 5.096: 97.5179% ( 15) 00:13:26.358 5.096 - 5.120: 97.5839% ( 8) 00:13:26.358 5.120 - 5.144: 97.6334% ( 6) 00:13:26.358 5.144 - 5.167: 97.6993% ( 8) 00:13:26.358 5.167 - 5.191: 97.7241% ( 3) 00:13:26.358 5.191 - 5.215: 97.7653% ( 5) 00:13:26.358 5.215 - 5.239: 97.7983% ( 4) 00:13:26.358 5.239 - 5.262: 97.8313% ( 4) 00:13:26.358 5.262 - 5.286: 97.8725% ( 5) 00:13:26.358 5.286 - 5.310: 97.9137% ( 5) 00:13:26.358 5.310 - 5.333: 97.9302% ( 2) 00:13:26.358 5.333 - 5.357: 97.9715% ( 5) 00:13:26.358 5.357 - 5.381: 98.0292% ( 7) 00:13:26.358 5.381 - 5.404: 98.0622% ( 4) 00:13:26.358 5.404 - 5.428: 98.0869% ( 3) 00:13:26.358 5.452 - 5.476: 98.0952% ( 1) 00:13:26.358 5.476 - 5.499: 98.1034% ( 1) 00:13:26.358 5.499 - 5.523: 98.1281% ( 3) 00:13:26.358 5.547 - 5.570: 98.1364% ( 1) 00:13:26.358 5.570 - 5.594: 98.1529% ( 2) 00:13:26.358 5.594 - 5.618: 98.1611% ( 1) 00:13:26.358 5.618 - 5.641: 98.1776% ( 2) 00:13:26.358 5.641 - 5.665: 98.1859% ( 1) 00:13:26.358 5.713 - 5.736: 98.2106% ( 3) 00:13:26.358 5.784 - 5.807: 98.2189% ( 1) 00:13:26.358 5.902 - 5.926: 98.2271% ( 1) 00:13:26.358 5.926 - 5.950: 98.2353% ( 1) 00:13:26.358 5.973 - 5.997: 98.2436% ( 1) 00:13:26.358 6.021 - 6.044: 98.2518% ( 1) 00:13:26.358 6.116 - 6.163: 98.2601% ( 1) 00:13:26.358 6.258 - 6.305: 98.2766% ( 2) 00:13:26.358 6.400 - 6.447: 98.2848% ( 1) 00:13:26.358 6.637 - 6.684: 98.2931% ( 1) 00:13:26.358 6.684 - 6.732: 98.3013% ( 1) 00:13:26.358 6.732 - 6.779: 98.3096% ( 1) 00:13:26.358 7.253 - 7.301: 98.3178% ( 1) 00:13:26.358 7.301 - 7.348: 98.3260% ( 1) 00:13:26.358 7.538 - 7.585: 98.3425% ( 2) 00:13:26.358 7.633 - 7.680: 98.3508% ( 1) 00:13:26.358 7.680 - 7.727: 98.3590% ( 1) 00:13:26.358 7.822 - 7.870: 98.3755% ( 2) 00:13:26.358 7.870 - 7.917: 98.3838% ( 1) 00:13:26.358 7.917 - 7.964: 98.3920% ( 1) 00:13:26.358 8.107 - 8.154: 98.4168% ( 3) 00:13:26.358 8.154 - 8.201: 98.4250% ( 1) 00:13:26.358 8.201 - 8.249: 98.4332% ( 1) 00:13:26.358 8.486 - 8.533: 98.4415% ( 1) 00:13:26.358 8.628 - 8.676: 98.4497% ( 1) 00:13:26.358 8.676 - 8.723: 98.4580% ( 1) 00:13:26.358 8.770 - 8.818: 98.4827% ( 3) 00:13:26.358 8.818 - 8.865: 98.4910% ( 1) 00:13:26.358 8.865 - 8.913: 98.4992% ( 1) 00:13:26.358 8.960 - 9.007: 98.5075% ( 1) 00:13:26.358 9.197 - 9.244: 98.5240% ( 2) 00:13:26.358 9.481 - 9.529: 98.5322% ( 1) 00:13:26.358 9.624 - 9.671: 98.5404% ( 1) 00:13:26.358 9.766 - 9.813: 98.5487% ( 1) 00:13:26.358 10.050 - 10.098: 98.5569% ( 1) 00:13:26.358 10.193 - 10.240: 98.5652% ( 1) 00:13:26.358 10.335 - 10.382: 98.5817% ( 2) 00:13:26.358 10.382 - 10.430: 98.5899% ( 1) 00:13:26.358 10.477 - 10.524: 98.6064% ( 2) 00:13:26.358 10.524 - 10.572: 98.6147% ( 1) 00:13:26.358 10.572 - 10.619: 98.6229% ( 1) 00:13:26.358 10.809 - 10.856: 98.6312% ( 1) 00:13:26.358 10.999 - 11.046: 98.6394% ( 1) 00:13:26.358 11.093 - 11.141: 98.6559% ( 2) 00:13:26.358 11.520 - 11.567: 98.6641% ( 1) 00:13:26.358 11.662 - 11.710: 98.6724% ( 1) 00:13:26.358 11.710 - 11.757: 98.6806% ( 1) 00:13:26.358 11.994 - 12.041: 98.6889% ( 1) 00:13:26.358 12.516 - 12.610: 98.6971% ( 1) 00:13:26.358 13.179 - 13.274: 98.7054% ( 1) 00:13:26.358 13.653 - 13.748: 98.7219% ( 2) 00:13:26.358 13.748 - 13.843: 98.7384% ( 2) 00:13:26.358 13.938 - 14.033: 98.7466% ( 1) 
00:13:26.358 14.317 - 14.412: 98.7548% ( 1) 00:13:26.358 15.076 - 15.170: 98.7631% ( 1) 00:13:26.358 16.972 - 17.067: 98.7713% ( 1) 00:13:26.358 17.067 - 17.161: 98.7796% ( 1) 00:13:26.358 17.161 - 17.256: 98.7878% ( 1) 00:13:26.358 17.256 - 17.351: 98.7961% ( 1) 00:13:26.358 17.351 - 17.446: 98.8126% ( 2) 00:13:26.358 17.446 - 17.541: 98.8373% ( 3) 00:13:26.358 17.541 - 17.636: 98.8785% ( 5) 00:13:26.358 17.636 - 17.730: 98.9115% ( 4) 00:13:26.358 17.730 - 17.825: 98.9528% ( 5) 00:13:26.358 17.825 - 17.920: 99.0187% ( 8) 00:13:26.358 17.920 - 18.015: 99.0599% ( 5) 00:13:26.358 18.015 - 18.110: 99.1094% ( 6) 00:13:26.358 18.110 - 18.204: 99.2166% ( 13) 00:13:26.358 18.204 - 18.299: 99.2991% ( 10) 00:13:26.358 18.299 - 18.394: 99.3651% ( 8) 00:13:26.358 18.394 - 18.489: 99.4558% ( 11) 00:13:26.358 18.489 - 18.584: 99.5300% ( 9) 00:13:26.358 18.584 - 18.679: 99.5712% ( 5) 00:13:26.358 18.679 - 18.773: 99.6372% ( 8) 00:13:26.358 18.773 - 18.868: 99.6866% ( 6) 00:13:26.358 18.868 - 18.963: 99.7279% ( 5) 00:13:26.358 18.963 - 19.058: 99.7361% ( 1) 00:13:26.358 19.058 - 19.153: 99.7444% ( 1) 00:13:26.358 19.153 - 19.247: 99.7526% ( 1) 00:13:26.358 19.247 - 19.342: 99.7609% ( 1) 00:13:26.358 19.342 - 19.437: 99.7691% ( 1) 00:13:26.358 19.627 - 19.721: 99.7774% ( 1) 00:13:26.358 19.721 - 19.816: 99.7856% ( 1) 00:13:26.358 20.101 - 20.196: 99.7938% ( 1) 00:13:26.358 20.764 - 20.859: 99.8021% ( 1) 00:13:26.358 21.144 - 21.239: 99.8103% ( 1) 00:13:26.358 22.281 - 22.376: 99.8268% ( 2) 00:13:26.358 22.661 - 22.756: 99.8351% ( 1) 00:13:26.358 25.221 - 25.410: 99.8433% ( 1) 00:13:26.358 26.738 - 26.927: 99.8516% ( 1) 00:13:26.358 28.634 - 28.824: 99.8598% ( 1) 00:13:26.358 3980.705 - 4004.978: 99.9340% ( 9) 00:13:26.358 4004.978 - 4029.250: 99.9835% ( 6) 00:13:26.358 4975.881 - 5000.154: 99.9918% ( 1) 00:13:26.358 5000.154 - 5024.427: 100.0000% ( 1) 00:13:26.358 00:13:26.358 Complete histogram 00:13:26.358 ================== 00:13:26.358 Range in us Cumulative Count 00:13:26.358 2.062 - 2.074: 4.5271% ( 549) 00:13:26.358 2.074 - 2.086: 27.7150% ( 2812) 00:13:26.358 2.086 - 2.098: 30.4610% ( 333) 00:13:26.358 2.098 - 2.110: 40.5459% ( 1223) 00:13:26.358 2.110 - 2.121: 51.1503% ( 1286) 00:13:26.358 2.121 - 2.133: 52.7418% ( 193) 00:13:26.358 2.133 - 2.145: 60.1303% ( 896) 00:13:26.358 2.145 - 2.157: 67.0817% ( 843) 00:13:26.358 2.157 - 2.169: 68.1042% ( 124) 00:13:26.358 2.169 - 2.181: 73.7280% ( 682) 00:13:26.358 2.181 - 2.193: 76.7708% ( 369) 00:13:26.358 2.193 - 2.204: 77.4223% ( 79) 00:13:26.358 2.204 - 2.216: 80.4816% ( 371) 00:13:26.358 2.216 - 2.228: 84.8190% ( 526) 00:13:26.358 2.228 - 2.240: 86.7816% ( 238) 00:13:26.358 2.240 - 2.252: 89.3378% ( 310) 00:13:26.358 2.252 - 2.264: 91.0860% ( 212) 00:13:26.358 2.264 - 2.276: 91.3994% ( 38) 00:13:26.358 2.276 - 2.287: 91.8694% ( 57) 00:13:26.358 2.287 - 2.299: 92.6280% ( 92) 00:13:26.358 2.299 - 2.311: 93.5516% ( 112) 00:13:26.358 2.311 - 2.323: 93.7742% ( 27) 00:13:26.358 2.323 - 2.335: 93.8484% ( 9) 00:13:26.358 2.335 - 2.347: 93.9556% ( 13) 00:13:26.358 2.347 - 2.359: 94.0628% ( 13) 00:13:26.358 2.359 - 2.370: 94.2030% ( 17) 00:13:26.358 2.370 - 2.382: 94.4339% ( 28) 00:13:26.358 2.382 - 2.394: 94.7473% ( 38) 00:13:26.358 2.394 - 2.406: 94.9699% ( 27) 00:13:26.358 2.406 - 2.418: 95.1925% ( 27) 00:13:26.359 2.418 - 2.430: 95.4152% ( 27) 00:13:26.359 2.430 - 2.441: 95.6048% ( 23) 00:13:26.359 2.441 - 2.453: 95.8028% ( 24) 00:13:26.359 2.453 - 2.465: 95.9842% ( 22) 00:13:26.359 2.465 - 2.477: 96.1656% ( 22) 00:13:26.359 2.477 - 2.489: 96.3387% ( 21) 
00:13:26.359 2.489 - 2.501: 96.5449% ( 25) 00:13:26.359 2.501 - 2.513: 96.6768% ( 16) 00:13:26.359 2.513 - 2.524: 96.8170% ( 17) 00:13:26.359 2.524 - 2.536: 96.9572% ( 17) 00:13:26.359 2.536 - 2.548: 97.0314% ( 9) 00:13:26.359 2.548 - 2.560: 97.1304% ( 12) 00:13:26.359 2.560 - 2.572: 97.2128% ( 10) 00:13:26.359 2.572 - 2.584: 97.2541% ( 5) 00:13:26.359 2.584 - 2.596: 97.2706% ( 2) 00:13:26.359 2.596 - 2.607: 97.3035% ( 4) 00:13:26.359 2.607 - 2.619: 97.3283% ( 3) 00:13:26.359 2.619 - 2.631: 97.4025% ( 9) 00:13:26.359 2.631 - 2.643: 97.4355% ( 4) 00:13:26.359 2.643 - 2.655: 97.4767% ( 5) 00:13:26.359 2.655 - 2.667: 97.5014% ( 3) 00:13:26.359 2.667 - 2.679: 97.5262% ( 3) 00:13:26.359 2.679 - 2.690: 97.5427% ( 2) 00:13:26.359 2.702 - 2.714: 97.5509% ( 1) 00:13:26.359 2.714 - 2.726: 97.5592% ( 1) 00:13:26.359 2.726 - 2.738: 97.5674% ( 1) 00:13:26.359 2.738 - 2.750: 97.6086% ( 5) 00:13:26.359 2.750 - 2.761: 97.6169% ( 1) 00:13:26.359 2.773 - 2.785: 97.6416% ( 3) 00:13:26.359 2.785 - 2.797: 97.6581% ( 2) 00:13:26.359 2.797 - 2.809: 97.6829% ( 3) 00:13:26.359 2.809 - 2.821: 97.6993% ( 2) 00:13:26.359 2.821 - 2.833: 97.7158% ( 2) 00:13:26.359 2.833 - 2.844: 97.7406% ( 3) 00:13:26.359 2.856 - 2.868: 97.7653% ( 3) 00:13:26.359 2.868 - 2.880: 97.7818% ( 2) 00:13:26.359 2.880 - 2.892: 97.7901% ( 1) 00:13:26.359 2.892 - 2.904: 97.8065% ( 2) 00:13:26.359 2.916 - 2.927: 97.8148% ( 1) 00:13:26.359 2.927 - 2.939: 97.8230% ( 1) 00:13:26.359 2.939 - 2.951: 97.8313% ( 1) 00:13:26.359 2.951 - 2.963: 97.8478% ( 2) 00:13:26.359 2.963 - 2.975: 97.8643% ( 2) 00:13:26.359 2.987 - 2.999: 97.8808% ( 2) 00:13:26.359 2.999 - 3.010: 97.8973% ( 2) 00:13:26.359 3.010 - 3.022: 97.9055% ( 1) 00:13:26.359 3.022 - 3.034: 97.9302% ( 3) 00:13:26.359 3.034 - 3.058: 97.9880% ( 7) 00:13:26.359 3.058 - 3.081: 98.0209% ( 4) 00:13:26.359 3.081 - 3.105: 98.0374% ( 2) 00:13:26.359 3.105 - 3.129: 98.0539% ( 2) 00:13:26.359 3.129 - 3.153: 98.0787% ( 3) 00:13:26.359 3.153 - 3.176: 98.0952% ( 2) 00:13:26.359 3.176 - 3.200: 98.1117% ( 2) 00:13:26.359 3.200 - 3.224: 98.1281% ( 2) 00:13:26.359 3.224 - 3.247: 98.1611% ( 4) 00:13:26.359 3.271 - 3.295: 98.1694% ( 1) 00:13:26.359 3.319 - 3.342: 98.1776% ( 1) 00:13:26.359 3.342 - 3.366: 98.1859% ( 1) 00:13:26.359 3.390 - 3.413: 98.1941% ( 1) 00:13:26.359 3.437 - 3.461: 98.2024% ( 1) 00:13:26.359 3.461 - 3.484: 98.2106% ( 1) 00:13:26.359 3.484 - 3.508: 98.2271% ( 2) 00:13:26.359 3.508 - 3.532: 98.2353% ( 1) 00:13:26.359 3.532 - 3.556: 98.2518% ( 2) 00:13:26.359 3.603 - 3.627: 98.2766% ( 3) 00:13:26.359 3.674 - 3.698: 98.3096% ( 4) 00:13:26.359 3.698 - 3.721: 98.3260% ( 2) 00:13:26.359 3.769 - 3.793: 98.3343% ( 1) 00:13:26.359 3.840 - 3.864: 98.3590% ( 3) 00:13:26.359 3.864 - 3.887: 98.3673% ( 1) 00:13:26.359 3.982 - 4.006: 98.3755% ( 1) 00:13:26.359 4.030 - 4.053: 98.3920% ( 2) 00:13:26.359 4.101 - 4.124: 98.4085% ( 2) 00:13:26.359 4.148 - 4.172: 98.4168% ( 1) 00:13:26.359 4.172 - 4.196: 98.4250% ( 1) 00:13:26.359 4.196 - 4.219: 98.4332% ( 1) 00:13:26.359 4.409 - 4.433: 98.4415% ( 1) 00:13:26.359 4.764 - 4.788: 98.4580% ( 2) 00:13:26.359 5.144 - 5.167: 98.4662% ( 1) 00:13:26.359 5.736 - 5.760: 98.4745% ( 1) 00:13:26.359 5.784 - 5.807: 98.4827% ( 1) 00:13:26.359 5.831 - 5.855: 98.4910% ( 1) 00:13:26.359 6.210 - 6.258: 98.5075% ( 2) 00:13:26.359 6.590 - 6.637: 98.5157% ( 1) 00:13:26.359 6.684 - 6.732: 98.5240% ( 1) 00:13:26.359 6.874 - 6.921: 98.5322% ( 1) 00:13:26.359 6.969 - 7.016: 98.5404% ( 1) 00:13:26.359 7.016 - 7.064: 98.5487% ( 1) 00:13:26.359 7.253 - 7.301: 98.5569% ( 1) 00:13:26.359 7.348 - 
7.396: 98.5652% ( 1) 00:13:26.359 7.585 - 7.633: 98.5899% ( 3) 00:13:26.359 7.775 - 7.822: 98.5982% ( 1) 00:13:26.359 8.107 - 8.154: 98.6064% ( 1) 00:13:26.359 8.154 - 8.201: 98.6147% ( 1) 00:13:26.359 8.249 - 8.296: 98.6229% ( 1) 00:13:26.359 8.628 - 8.676: 98.6312% ( 1) 00:13:26.359 8.865 - 8.913: 98.6476% ( 2) 00:13:26.359 9.719 - 9.766: 98.6559% ( 1) 00:13:26.359 9.956 - 10.003: 98.6641% ( 1) 00:13:26.359 10.003 - 10.050: 98.6724% ( 1) 00:13:26.359 10.572 - 10.619: 98.6889% ( 2) 00:13:26.359 15.455 - 15.550: 98.6971% ( 1) 00:13:26.359 15.550 - 15.644: 98.7136% ( 2) 00:13:26.359 15.644 - 15.739: 98.7219% ( 1) 00:13:26.359 15.739 - 15.834: 98.7466% ( 3) 00:13:26.359 15.834 - 15.929: 98.7631% ( 2) 00:13:26.359 15.929 - 16.024: 98.7713% ( 1) 00:13:26.359 16.024 - 16.119: 98.8126% ( 5) 00:13:26.359 16.119 - 16.213: 98.8373% ( 3) 00:13:26.359 16.213 - 16.308: 98.8620% ( 3) 00:13:26.359 16.308 - 16.403: 98.8868% ( 3) 00:13:26.359 16.403 - 16.498: 98.9115% ( 3) 00:13:26.359 16.498 - 16.593: 98.9775% ( 8) 00:13:26.359 16.593 - 16.687: 99.0517% ( 9) 00:13:26.359 16.687 - 16.782: 99.0764% ( 3) 00:13:26.359 16.782 - 16.877: 99.1259% ( 6) 00:13:26.359 16.877 - 16.972: 99.1589% ( 4) 00:13:26.359 16.972 - 17.067: 99.1919% ( 4) 00:13:26.359 17.067 - 17.161: 99.2001% ( 1) 00:13:26.359 17.256 - 17.351: 99.2084% ( 1) 00:13:26.359 17.351 - 17.446: 99.2249% ( 2) 00:13:26.359 17.446 - 17.541: 99.2331% ( 1) 00:13:26.359 17.541 - 17.636: 99.2496% ( 2) 00:13:26.359 17.730 - 17.825: 99.2826% ( 4) 00:13:26.359 17.920 - 18.015: 99.2908%[2024-12-10 04:01:20.333510] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:26.359 ( 1) 00:13:26.359 18.204 - 18.299: 99.2991% ( 1) 00:13:26.359 18.299 - 18.394: 99.3073% ( 1) 00:13:26.359 18.394 - 18.489: 99.3156% ( 1) 00:13:26.359 18.773 - 18.868: 99.3238% ( 1) 00:13:26.359 19.627 - 19.721: 99.3321% ( 1) 00:13:26.359 21.428 - 21.523: 99.3403% ( 1) 00:13:26.359 23.799 - 23.893: 99.3486% ( 1) 00:13:26.359 24.273 - 24.462: 99.3568% ( 1) 00:13:26.359 3021.938 - 3034.074: 99.3733% ( 2) 00:13:26.359 3094.756 - 3106.892: 99.3815% ( 1) 00:13:26.359 3980.705 - 4004.978: 99.7196% ( 41) 00:13:26.359 4004.978 - 4029.250: 99.9835% ( 32) 00:13:26.359 5000.154 - 5024.427: 99.9918% ( 1) 00:13:26.359 7039.052 - 7087.597: 100.0000% ( 1) 00:13:26.359 00:13:26.359 04:01:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:26.359 04:01:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:26.359 04:01:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:26.359 04:01:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:26.359 04:01:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:26.359 [ 00:13:26.359 { 00:13:26.359 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:26.359 "subtype": "Discovery", 00:13:26.359 "listen_addresses": [], 00:13:26.359 "allow_any_host": true, 00:13:26.359 "hosts": [] 00:13:26.359 }, 00:13:26.359 { 00:13:26.359 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:26.359 "subtype": "NVMe", 00:13:26.359 "listen_addresses": [ 00:13:26.359 { 00:13:26.359 "trtype": "VFIOUSER", 
00:13:26.359 "adrfam": "IPv4", 00:13:26.359 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:26.359 "trsvcid": "0" 00:13:26.359 } 00:13:26.359 ], 00:13:26.359 "allow_any_host": true, 00:13:26.359 "hosts": [], 00:13:26.359 "serial_number": "SPDK1", 00:13:26.359 "model_number": "SPDK bdev Controller", 00:13:26.359 "max_namespaces": 32, 00:13:26.359 "min_cntlid": 1, 00:13:26.359 "max_cntlid": 65519, 00:13:26.359 "namespaces": [ 00:13:26.359 { 00:13:26.359 "nsid": 1, 00:13:26.359 "bdev_name": "Malloc1", 00:13:26.359 "name": "Malloc1", 00:13:26.359 "nguid": "3383A137C9E94F418F0853492C165810", 00:13:26.359 "uuid": "3383a137-c9e9-4f41-8f08-53492c165810" 00:13:26.359 }, 00:13:26.359 { 00:13:26.359 "nsid": 2, 00:13:26.359 "bdev_name": "Malloc3", 00:13:26.359 "name": "Malloc3", 00:13:26.359 "nguid": "3B67B2114BAD437798C59A1617DC8D99", 00:13:26.359 "uuid": "3b67b211-4bad-4377-98c5-9a1617dc8d99" 00:13:26.359 } 00:13:26.359 ] 00:13:26.359 }, 00:13:26.359 { 00:13:26.359 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:26.359 "subtype": "NVMe", 00:13:26.359 "listen_addresses": [ 00:13:26.359 { 00:13:26.359 "trtype": "VFIOUSER", 00:13:26.359 "adrfam": "IPv4", 00:13:26.359 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:26.359 "trsvcid": "0" 00:13:26.359 } 00:13:26.359 ], 00:13:26.359 "allow_any_host": true, 00:13:26.359 "hosts": [], 00:13:26.359 "serial_number": "SPDK2", 00:13:26.359 "model_number": "SPDK bdev Controller", 00:13:26.360 "max_namespaces": 32, 00:13:26.360 "min_cntlid": 1, 00:13:26.360 "max_cntlid": 65519, 00:13:26.360 "namespaces": [ 00:13:26.360 { 00:13:26.360 "nsid": 1, 00:13:26.360 "bdev_name": "Malloc2", 00:13:26.360 "name": "Malloc2", 00:13:26.360 "nguid": "3FF0612BD2DE4BD6BA41658031EC2DF6", 00:13:26.360 "uuid": "3ff0612b-d2de-4bd6-ba41-658031ec2df6" 00:13:26.360 } 00:13:26.360 ] 00:13:26.360 } 00:13:26.360 ] 00:13:26.360 04:01:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:26.360 04:01:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2372162 00:13:26.360 04:01:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:26.360 04:01:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:26.360 04:01:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:26.360 04:01:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:26.360 04:01:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:26.360 04:01:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:26.360 04:01:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:26.360 04:01:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:26.618 [2024-12-10 04:01:20.887159] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:26.618 Malloc4 00:13:26.876 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:26.876 [2024-12-10 04:01:21.256045] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:27.134 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:27.134 Asynchronous Event Request test 00:13:27.134 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:27.134 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:27.134 Registering asynchronous event callbacks... 00:13:27.134 Starting namespace attribute notice tests for all controllers... 00:13:27.135 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:27.135 aer_cb - Changed Namespace 00:13:27.135 Cleaning up... 00:13:27.393 [ 00:13:27.393 { 00:13:27.393 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:27.393 "subtype": "Discovery", 00:13:27.393 "listen_addresses": [], 00:13:27.393 "allow_any_host": true, 00:13:27.393 "hosts": [] 00:13:27.393 }, 00:13:27.393 { 00:13:27.393 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:27.393 "subtype": "NVMe", 00:13:27.393 "listen_addresses": [ 00:13:27.393 { 00:13:27.393 "trtype": "VFIOUSER", 00:13:27.393 "adrfam": "IPv4", 00:13:27.393 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:27.393 "trsvcid": "0" 00:13:27.393 } 00:13:27.393 ], 00:13:27.393 "allow_any_host": true, 00:13:27.393 "hosts": [], 00:13:27.393 "serial_number": "SPDK1", 00:13:27.393 "model_number": "SPDK bdev Controller", 00:13:27.393 "max_namespaces": 32, 00:13:27.393 "min_cntlid": 1, 00:13:27.393 "max_cntlid": 65519, 00:13:27.393 "namespaces": [ 00:13:27.393 { 00:13:27.393 "nsid": 1, 00:13:27.393 "bdev_name": "Malloc1", 00:13:27.393 "name": "Malloc1", 00:13:27.393 "nguid": "3383A137C9E94F418F0853492C165810", 00:13:27.393 "uuid": "3383a137-c9e9-4f41-8f08-53492c165810" 00:13:27.393 }, 00:13:27.393 { 00:13:27.393 "nsid": 2, 00:13:27.393 "bdev_name": "Malloc3", 00:13:27.393 "name": "Malloc3", 00:13:27.393 "nguid": "3B67B2114BAD437798C59A1617DC8D99", 00:13:27.393 "uuid": "3b67b211-4bad-4377-98c5-9a1617dc8d99" 00:13:27.393 } 00:13:27.393 ] 00:13:27.393 }, 00:13:27.393 { 00:13:27.393 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:27.393 "subtype": "NVMe", 00:13:27.393 "listen_addresses": [ 00:13:27.393 { 00:13:27.393 "trtype": "VFIOUSER", 00:13:27.393 "adrfam": "IPv4", 00:13:27.393 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:27.393 "trsvcid": "0" 00:13:27.393 } 00:13:27.393 ], 00:13:27.393 "allow_any_host": true, 00:13:27.393 "hosts": [], 00:13:27.393 "serial_number": "SPDK2", 00:13:27.393 "model_number": "SPDK bdev 
Controller", 00:13:27.393 "max_namespaces": 32, 00:13:27.393 "min_cntlid": 1, 00:13:27.393 "max_cntlid": 65519, 00:13:27.393 "namespaces": [ 00:13:27.393 { 00:13:27.393 "nsid": 1, 00:13:27.393 "bdev_name": "Malloc2", 00:13:27.393 "name": "Malloc2", 00:13:27.393 "nguid": "3FF0612BD2DE4BD6BA41658031EC2DF6", 00:13:27.393 "uuid": "3ff0612b-d2de-4bd6-ba41-658031ec2df6" 00:13:27.393 }, 00:13:27.393 { 00:13:27.393 "nsid": 2, 00:13:27.393 "bdev_name": "Malloc4", 00:13:27.393 "name": "Malloc4", 00:13:27.393 "nguid": "E7ABC7BA4326413FA299DDB4A5A4DAFD", 00:13:27.393 "uuid": "e7abc7ba-4326-413f-a299-ddb4a5a4dafd" 00:13:27.393 } 00:13:27.393 ] 00:13:27.393 } 00:13:27.393 ] 00:13:27.393 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2372162 00:13:27.393 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:27.393 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2366449 00:13:27.393 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2366449 ']' 00:13:27.393 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2366449 00:13:27.393 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:27.393 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.393 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2366449 00:13:27.393 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.393 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.393 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2366449' 00:13:27.393 killing process with pid 2366449 00:13:27.393 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2366449 00:13:27.393 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2366449 00:13:27.652 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:27.652 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:27.652 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:27.652 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:27.652 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:27.652 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2372300 00:13:27.652 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:27.652 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2372300' 00:13:27.652 Process pid: 2372300 00:13:27.652 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 
-- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:27.652 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2372300 00:13:27.652 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2372300 ']' 00:13:27.652 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.652 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.652 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.652 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.652 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:27.652 [2024-12-10 04:01:21.960672] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:27.652 [2024-12-10 04:01:21.961736] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:13:27.652 [2024-12-10 04:01:21.961802] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.652 [2024-12-10 04:01:22.027812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:27.911 [2024-12-10 04:01:22.083898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.911 [2024-12-10 04:01:22.083949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.911 [2024-12-10 04:01:22.083978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.911 [2024-12-10 04:01:22.083996] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.911 [2024-12-10 04:01:22.084006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.911 [2024-12-10 04:01:22.085416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.911 [2024-12-10 04:01:22.085511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.911 [2024-12-10 04:01:22.085548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.911 [2024-12-10 04:01:22.085553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.911 [2024-12-10 04:01:22.179324] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:27.911 [2024-12-10 04:01:22.179585] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:27.911 [2024-12-10 04:01:22.179837] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:27.911 [2024-12-10 04:01:22.180526] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
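For reference, the interrupt-mode bring-up that the surrounding trace performs condenses to the sketch below. The $rpc shorthand and the for-loop are added for readability; the commands, socket paths, and NQNs are exactly the ones logged above and below, with '-M -I' being the extra transport args this --interrupt-mode pass supplies.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # VFIOUSER transport; this pass forwards the script's extra transport args '-M -I'
    $rpc nvmf_create_transport -t VFIOUSER -M -I
    for i in 1 2; do
        # one vfio-user socket directory per emulated controller
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        # backing malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, as in the scripts)
        $rpc bdev_malloc_create 64 512 -b Malloc$i
        # subsystem open to any host (-a) with serial number SPDK$i
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done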
00:13:27.911 [2024-12-10 04:01:22.180772] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:27.911 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.911 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:27.911 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:28.847 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:29.415 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:29.415 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:29.415 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:29.415 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:29.415 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:29.674 Malloc1 00:13:29.674 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:29.932 04:01:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:30.198 04:01:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:30.517 04:01:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:30.517 04:01:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:30.517 04:01:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:30.881 Malloc2 00:13:30.881 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:31.138 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:31.394 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:31.652 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:31.652 04:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2372300 00:13:31.652 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2372300 ']' 00:13:31.652 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2372300 00:13:31.652 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:31.652 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.652 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2372300 00:13:31.652 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:31.652 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:31.652 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2372300' 00:13:31.652 killing process with pid 2372300 00:13:31.652 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2372300 00:13:31.652 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2372300 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:32.220 00:13:32.220 real 0m53.693s 00:13:32.220 user 3m27.833s 00:13:32.220 sys 0m3.957s 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:32.220 ************************************ 00:13:32.220 END TEST nvmf_vfio_user 00:13:32.220 ************************************ 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:32.220 ************************************ 00:13:32.220 START TEST nvmf_vfio_user_nvme_compliance 00:13:32.220 ************************************ 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:32.220 * Looking for test storage... 
00:13:32.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:32.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.220 --rc genhtml_branch_coverage=1 00:13:32.220 --rc genhtml_function_coverage=1 00:13:32.220 --rc genhtml_legend=1 00:13:32.220 --rc geninfo_all_blocks=1 00:13:32.220 --rc geninfo_unexecuted_blocks=1 00:13:32.220 00:13:32.220 ' 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:32.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.220 --rc genhtml_branch_coverage=1 00:13:32.220 --rc genhtml_function_coverage=1 00:13:32.220 --rc genhtml_legend=1 00:13:32.220 --rc geninfo_all_blocks=1 00:13:32.220 --rc geninfo_unexecuted_blocks=1 00:13:32.220 00:13:32.220 ' 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:32.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.220 --rc genhtml_branch_coverage=1 00:13:32.220 --rc genhtml_function_coverage=1 00:13:32.220 --rc genhtml_legend=1 00:13:32.220 --rc geninfo_all_blocks=1 00:13:32.220 --rc geninfo_unexecuted_blocks=1 00:13:32.220 00:13:32.220 ' 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:32.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.220 --rc genhtml_branch_coverage=1 00:13:32.220 --rc genhtml_function_coverage=1 00:13:32.220 --rc genhtml_legend=1 00:13:32.220 --rc geninfo_all_blocks=1 00:13:32.220 --rc 
geninfo_unexecuted_blocks=1 00:13:32.220 00:13:32.220 ' 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.220 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:32.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2372918 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2372918' 00:13:32.221 Process pid: 2372918 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2372918 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2372918 ']' 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.221 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:32.221 [2024-12-10 04:01:26.579927] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
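For orientation, the compliance run traced below sets up a single vfio-user controller and then points the nvme_compliance binary at it. Condensed into plain commands (rpc_cmd, the autotest wrapper used in the trace, is written out here as direct rpc.py calls; everything else mirrors the logged invocations):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    $rpc bdev_malloc_create 64 512 -b malloc0
    # -a: allow any host, -s spdk: serial number, -m 32: max namespaces
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    # the compliance suite (18 tests, per the Run Summary below) is then run against that endpoint
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'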
00:13:32.221 [2024-12-10 04:01:26.580003] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.480 [2024-12-10 04:01:26.653614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:32.480 [2024-12-10 04:01:26.715952] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.480 [2024-12-10 04:01:26.716008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.480 [2024-12-10 04:01:26.716038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.480 [2024-12-10 04:01:26.716050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.480 [2024-12-10 04:01:26.716060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.480 [2024-12-10 04:01:26.717642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.480 [2024-12-10 04:01:26.717673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.480 [2024-12-10 04:01:26.717678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.480 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.480 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:13:32.480 04:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:33.853 malloc0 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:33.853 04:01:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.853 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:33.853 00:13:33.853 00:13:33.853 CUnit - A unit testing framework for C - Version 2.1-3 00:13:33.853 http://cunit.sourceforge.net/ 00:13:33.853 00:13:33.853 00:13:33.853 Suite: nvme_compliance 00:13:33.853 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-10 04:01:28.091058] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.853 [2024-12-10 04:01:28.092564] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:33.853 [2024-12-10 04:01:28.092590] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:33.853 [2024-12-10 04:01:28.092604] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:33.853 [2024-12-10 04:01:28.097097] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.853 passed 00:13:33.853 Test: admin_identify_ctrlr_verify_fused ...[2024-12-10 04:01:28.181715] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.853 [2024-12-10 04:01:28.184729] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.853 passed 00:13:34.111 Test: admin_identify_ns ...[2024-12-10 04:01:28.273091] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.111 [2024-12-10 04:01:28.333580] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:34.111 [2024-12-10 04:01:28.341581] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:34.111 [2024-12-10 04:01:28.362706] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:13:34.111 passed 00:13:34.111 Test: admin_get_features_mandatory_features ...[2024-12-10 04:01:28.443333] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.111 [2024-12-10 04:01:28.448367] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.111 passed 00:13:34.369 Test: admin_get_features_optional_features ...[2024-12-10 04:01:28.531973] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.369 [2024-12-10 04:01:28.534995] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.369 passed 00:13:34.369 Test: admin_set_features_number_of_queues ...[2024-12-10 04:01:28.619092] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.369 [2024-12-10 04:01:28.723666] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.627 passed 00:13:34.627 Test: admin_get_log_page_mandatory_logs ...[2024-12-10 04:01:28.807555] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.627 [2024-12-10 04:01:28.810571] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.627 passed 00:13:34.627 Test: admin_get_log_page_with_lpo ...[2024-12-10 04:01:28.894095] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.627 [2024-12-10 04:01:28.961560] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:34.627 [2024-12-10 04:01:28.974622] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.885 passed 00:13:34.885 Test: fabric_property_get ...[2024-12-10 04:01:29.060023] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.885 [2024-12-10 04:01:29.061296] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:34.885 [2024-12-10 04:01:29.063041] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.885 passed 00:13:34.885 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-10 04:01:29.146586] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.885 [2024-12-10 04:01:29.147926] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:34.885 [2024-12-10 04:01:29.149627] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.885 passed 00:13:34.885 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-10 04:01:29.235763] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:35.143 [2024-12-10 04:01:29.319556] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:35.143 [2024-12-10 04:01:29.335555] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:35.143 [2024-12-10 04:01:29.340683] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:35.143 passed 00:13:35.143 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-10 04:01:29.424379] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:35.143 [2024-12-10 04:01:29.425713] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:35.143 [2024-12-10 04:01:29.427403] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:13:35.143 passed 00:13:35.143 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-10 04:01:29.507598] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:35.401 [2024-12-10 04:01:29.585558] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:35.401 [2024-12-10 04:01:29.609572] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:35.401 [2024-12-10 04:01:29.614655] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:35.401 passed 00:13:35.401 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-10 04:01:29.698351] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:35.401 [2024-12-10 04:01:29.699676] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:35.401 [2024-12-10 04:01:29.699715] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:35.401 [2024-12-10 04:01:29.701373] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:35.401 passed 00:13:35.401 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-10 04:01:29.782722] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:35.659 [2024-12-10 04:01:29.874557] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:35.659 [2024-12-10 04:01:29.882562] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:35.659 [2024-12-10 04:01:29.890568] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:35.659 [2024-12-10 04:01:29.898572] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:35.659 [2024-12-10 04:01:29.927690] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:35.659 passed 00:13:35.659 Test: admin_create_io_sq_verify_pc ...[2024-12-10 04:01:30.009826] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:35.659 [2024-12-10 04:01:30.026570] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:35.916 [2024-12-10 04:01:30.044280] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:35.916 passed 00:13:35.916 Test: admin_create_io_qp_max_qps ...[2024-12-10 04:01:30.129956] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:37.286 [2024-12-10 04:01:31.246564] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:13:37.286 [2024-12-10 04:01:31.617999] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:37.286 passed 00:13:37.544 Test: admin_create_io_sq_shared_cq ...[2024-12-10 04:01:31.700077] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:37.544 [2024-12-10 04:01:31.831559] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:37.544 [2024-12-10 04:01:31.865666] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:37.544 passed 00:13:37.544 00:13:37.544 Run Summary: Type Total Ran Passed Failed Inactive 00:13:37.544 suites 1 1 n/a 0 0 00:13:37.544 tests 18 18 18 0 0 00:13:37.544 asserts 
360 360 360 0 n/a 00:13:37.544 00:13:37.544 Elapsed time = 1.561 seconds 00:13:37.544 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2372918 00:13:37.544 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2372918 ']' 00:13:37.544 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2372918 00:13:37.544 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:13:37.544 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:37.544 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2372918 00:13:37.802 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:37.802 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:37.802 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2372918' 00:13:37.802 killing process with pid 2372918 00:13:37.802 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2372918 00:13:37.802 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2372918 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:38.061 00:13:38.061 real 0m5.846s 00:13:38.061 user 0m16.399s 00:13:38.061 sys 0m0.560s 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:38.061 ************************************ 00:13:38.061 END TEST nvmf_vfio_user_nvme_compliance 00:13:38.061 ************************************ 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:38.061 ************************************ 00:13:38.061 START TEST nvmf_vfio_user_fuzz 00:13:38.061 ************************************ 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:38.061 * Looking for test storage... 
00:13:38.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:38.061 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:38.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.061 --rc genhtml_branch_coverage=1 00:13:38.061 --rc genhtml_function_coverage=1 00:13:38.061 --rc genhtml_legend=1 00:13:38.061 --rc geninfo_all_blocks=1 00:13:38.061 --rc geninfo_unexecuted_blocks=1 00:13:38.062 00:13:38.062 ' 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:38.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.062 --rc genhtml_branch_coverage=1 00:13:38.062 --rc genhtml_function_coverage=1 00:13:38.062 --rc genhtml_legend=1 00:13:38.062 --rc geninfo_all_blocks=1 00:13:38.062 --rc geninfo_unexecuted_blocks=1 00:13:38.062 00:13:38.062 ' 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:38.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.062 --rc genhtml_branch_coverage=1 00:13:38.062 --rc genhtml_function_coverage=1 00:13:38.062 --rc genhtml_legend=1 00:13:38.062 --rc geninfo_all_blocks=1 00:13:38.062 --rc geninfo_unexecuted_blocks=1 00:13:38.062 00:13:38.062 ' 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:38.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.062 --rc genhtml_branch_coverage=1 00:13:38.062 --rc genhtml_function_coverage=1 00:13:38.062 --rc genhtml_legend=1 00:13:38.062 --rc geninfo_all_blocks=1 00:13:38.062 --rc geninfo_unexecuted_blocks=1 00:13:38.062 00:13:38.062 ' 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:38.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2373651 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2373651' 00:13:38.062 Process pid: 2373651 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2373651 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2373651 ']' 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.062 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:38.321 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.321 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:13:38.321 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:39.694 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:39.694 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.694 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:39.694 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.694 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:39.694 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:39.694 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.694 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:39.694 malloc0 00:13:39.694 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.694 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:39.694 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.694 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:39.694 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.694 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:39.694 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.695 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:39.695 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.695 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:39.695 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.695 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:39.695 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.695 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
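For readers following the trace: the vfio-user fuzz target stood up above reduces to a short RPC sequence. Below is a minimal sketch of the equivalent manual steps; the SPDK path, NQN, bdev size and socket directory are all taken from this run, and an nvmf_tgt is assumed to already be running as logged (nvmf_tgt -i 0 -e 0xFFFF -m 0x1).

# Sketch of the vfio-user target bring-up traced above (order follows the trace).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

$RPC nvmf_create_transport -t VFIOUSER                            # enable the VFIOUSER transport
mkdir -p /var/run/vfio-user                                       # directory for the vfio-user endpoint
$RPC bdev_malloc_create 64 512 -b malloc0                         # 64 MB malloc bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk  # allow any host, serial "spdk"
$RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz run that follows drives this subsystem through the vfio-user endpoint for 30 seconds (-t 30) with a fixed seed (-S 123456), after which the subsystem is deleted and the target process is killed.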
00:13:39.695 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:11.771 Fuzzing completed. Shutting down the fuzz application 00:14:11.771 00:14:11.771 Dumping successful admin opcodes: 00:14:11.771 9, 10, 00:14:11.771 Dumping successful io opcodes: 00:14:11.771 0, 00:14:11.771 NS: 0x20000081ef00 I/O qp, Total commands completed: 681857, total successful commands: 2657, random_seed: 1827588224 00:14:11.771 NS: 0x20000081ef00 admin qp, Total commands completed: 161584, total successful commands: 37, random_seed: 2782424640 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2373651 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2373651 ']' 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2373651 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2373651 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2373651' 00:14:11.771 killing process with pid 2373651 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2373651 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2373651 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:11.771 00:14:11.771 real 0m32.245s 00:14:11.771 user 0m33.994s 00:14:11.771 sys 0m25.684s 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:11.771 ************************************ 
00:14:11.771 END TEST nvmf_vfio_user_fuzz 00:14:11.771 ************************************ 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:11.771 ************************************ 00:14:11.771 START TEST nvmf_auth_target 00:14:11.771 ************************************ 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:11.771 * Looking for test storage... 00:14:11.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:11.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.771 --rc genhtml_branch_coverage=1 00:14:11.771 --rc genhtml_function_coverage=1 00:14:11.771 --rc genhtml_legend=1 00:14:11.771 --rc geninfo_all_blocks=1 00:14:11.771 --rc geninfo_unexecuted_blocks=1 00:14:11.771 00:14:11.771 ' 00:14:11.771 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:11.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.771 --rc genhtml_branch_coverage=1 00:14:11.771 --rc genhtml_function_coverage=1 00:14:11.771 --rc genhtml_legend=1 00:14:11.771 --rc geninfo_all_blocks=1 00:14:11.771 --rc geninfo_unexecuted_blocks=1 00:14:11.771 00:14:11.771 ' 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:11.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.772 --rc genhtml_branch_coverage=1 00:14:11.772 --rc genhtml_function_coverage=1 00:14:11.772 --rc genhtml_legend=1 00:14:11.772 --rc geninfo_all_blocks=1 00:14:11.772 --rc geninfo_unexecuted_blocks=1 00:14:11.772 00:14:11.772 ' 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:11.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.772 --rc genhtml_branch_coverage=1 00:14:11.772 --rc genhtml_function_coverage=1 00:14:11.772 --rc genhtml_legend=1 00:14:11.772 --rc geninfo_all_blocks=1 00:14:11.772 --rc geninfo_unexecuted_blocks=1 00:14:11.772 00:14:11.772 ' 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.772 04:02:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:11.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:11.772 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:12.709 
04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:12.709 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:12.709 04:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:12.709 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:12.709 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:12.709 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:12.710 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:12.710 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:12.710 04:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:12.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:12.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:14:12.710 00:14:12.710 --- 10.0.0.2 ping statistics --- 00:14:12.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.710 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:12.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:12.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:14:12.710 00:14:12.710 --- 10.0.0.1 ping statistics --- 00:14:12.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.710 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2378982 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2378982 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2378982 ']' 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
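The nvmf_tcp_init sequence above moves the target-side port of the e810 pair (cvl_0_0, at 0000:0a:00.0) into a private network namespace and keeps its peer (cvl_0_1, at 0000:0a:00.1) in the root namespace as the initiator interface. A condensed sketch of the commands the trace shows, with addresses and interface names taken from this run:

# Target NIC goes into its own namespace; initiator NIC stays in the root namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port toward the initiator interface, then check reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth), so it listens on 10.0.0.2 while the host-side spdk_tgt stays in the root namespace.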
00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.710 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2379124 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=38d3d55b22a949a6569992920dfc4f94666352dfdcc88a70 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.tQ2 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 38d3d55b22a949a6569992920dfc4f94666352dfdcc88a70 0 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 38d3d55b22a949a6569992920dfc4f94666352dfdcc88a70 0 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=38d3d55b22a949a6569992920dfc4f94666352dfdcc88a70 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.tQ2 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.tQ2 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.tQ2 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=df94ae84a1074642b267e7a6fdacd6a5abae1602dea429d5c955fa3e60d2cb64 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.bPO 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key df94ae84a1074642b267e7a6fdacd6a5abae1602dea429d5c955fa3e60d2cb64 3 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 df94ae84a1074642b267e7a6fdacd6a5abae1602dea429d5c955fa3e60d2cb64 3 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=df94ae84a1074642b267e7a6fdacd6a5abae1602dea429d5c955fa3e60d2cb64 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.bPO 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.bPO 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.bPO 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7a2a010fbab6cc8fa2a575bd281649f9 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.uR8 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7a2a010fbab6cc8fa2a575bd281649f9 1 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7a2a010fbab6cc8fa2a575bd281649f9 1 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7a2a010fbab6cc8fa2a575bd281649f9 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.uR8 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.uR8 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.uR8 00:14:13.279 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=38d06dd659ea8111f9421d4cab69f173f154d7c6885f30f2 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4MZ 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 38d06dd659ea8111f9421d4cab69f173f154d7c6885f30f2 2 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 38d06dd659ea8111f9421d4cab69f173f154d7c6885f30f2 2 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:13.280 04:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=38d06dd659ea8111f9421d4cab69f173f154d7c6885f30f2 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4MZ 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4MZ 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.4MZ 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9895ae2bd87d12651705fef32166441d53552f90ade0076b 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Blp 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9895ae2bd87d12651705fef32166441d53552f90ade0076b 2 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9895ae2bd87d12651705fef32166441d53552f90ade0076b 2 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9895ae2bd87d12651705fef32166441d53552f90ade0076b 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Blp 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Blp 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Blp 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a31b976e9145d1481faf3ca3a4c0be4a 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.n7y 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a31b976e9145d1481faf3ca3a4c0be4a 1 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a31b976e9145d1481faf3ca3a4c0be4a 1 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a31b976e9145d1481faf3ca3a4c0be4a 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:13.280 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.n7y 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.n7y 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.n7y 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c6fd16f3f1233d078c797a532c56995a1f8b01a4b78d0ad2a79138512915e22c 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.jHb 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key c6fd16f3f1233d078c797a532c56995a1f8b01a4b78d0ad2a79138512915e22c 3 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c6fd16f3f1233d078c797a532c56995a1f8b01a4b78d0ad2a79138512915e22c 3 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c6fd16f3f1233d078c797a532c56995a1f8b01a4b78d0ad2a79138512915e22c 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.jHb 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.jHb 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.jHb 00:14:13.539 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:13.540 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2378982 00:14:13.540 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2378982 ']' 00:14:13.540 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.540 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.540 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.540 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.540 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.798 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.798 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:13.798 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2379124 /var/tmp/host.sock 00:14:13.798 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2379124 ']' 00:14:13.798 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:13.798 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.798 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:13.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
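Each gen_dhchap_key call above draws random bytes with xxd, writes the result to a 0600 temp file under /tmp, and hands the path back for the keys[]/ckeys[] arrays; the inline python step (its body is not captured in the xtrace output) wraps the raw material in the DHHC-1 secret representation using the digest index shown in the trace (0 = null, 1 = sha256, 2 = sha384, 3 = sha512). A rough sketch of the visible shell portion, for a 48-character null-digest key like keys[0]:

# Sketch of the visible part of gen_dhchap_key null 48; the DHHC-1 formatting itself
# is done by an inline python snippet that the trace does not show.
len=48
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # 24 random bytes as a 48-char hex string
file=$(mktemp -t spdk.key-null.XXX)              # e.g. /tmp/spdk.key-null.tQ2 in this run
# ... format "$key" with the DHHC-1 prefix and digest index, write it into "$file" ...
chmod 0600 "$file"
echo "$file"                                     # the caller records this path in keys[]/ckeys[]

Note that keys[3] (/tmp/spdk.key-sha512.jHb) has no companion: the trace shows ckeys[3] being set to an empty value.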
00:14:13.798 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.798 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.056 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.056 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:14.056 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:14.056 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.056 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.056 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.056 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:14.056 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tQ2 00:14:14.056 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.056 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.056 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.056 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.tQ2 00:14:14.056 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.tQ2 00:14:14.315 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.bPO ]] 00:14:14.315 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bPO 00:14:14.315 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.315 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.315 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.315 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bPO 00:14:14.315 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bPO 00:14:14.590 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:14.590 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.uR8 00:14:14.590 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.590 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.590 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.590 04:02:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.uR8 00:14:14.590 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.uR8 00:14:14.852 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.4MZ ]] 00:14:14.852 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4MZ 00:14:14.852 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.852 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.852 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.852 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4MZ 00:14:14.852 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4MZ 00:14:15.419 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:15.419 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Blp 00:14:15.419 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.419 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.419 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.419 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Blp 00:14:15.419 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Blp 00:14:15.419 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.n7y ]] 00:14:15.419 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n7y 00:14:15.419 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.419 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.419 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.419 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n7y 00:14:15.419 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n7y 00:14:15.678 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:15.678 04:02:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.jHb 00:14:15.678 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.678 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.678 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.678 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.jHb 00:14:15.678 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.jHb 00:14:16.245 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:16.245 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:16.245 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:16.245 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.245 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:16.245 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:16.504 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:16.504 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.504 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:16.504 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:16.504 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:16.504 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.504 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.504 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.504 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.504 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.504 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.504 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.504 
04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.762 00:14:16.762 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.762 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.762 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:17.021 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.021 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.021 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.021 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.021 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.021 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.021 { 00:14:17.021 "cntlid": 1, 00:14:17.021 "qid": 0, 00:14:17.021 "state": "enabled", 00:14:17.021 "thread": "nvmf_tgt_poll_group_000", 00:14:17.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:17.021 "listen_address": { 00:14:17.021 "trtype": "TCP", 00:14:17.021 "adrfam": "IPv4", 00:14:17.021 "traddr": "10.0.0.2", 00:14:17.021 "trsvcid": "4420" 00:14:17.021 }, 00:14:17.021 "peer_address": { 00:14:17.021 "trtype": "TCP", 00:14:17.021 "adrfam": "IPv4", 00:14:17.021 "traddr": "10.0.0.1", 00:14:17.021 "trsvcid": "45494" 00:14:17.021 }, 00:14:17.021 "auth": { 00:14:17.021 "state": "completed", 00:14:17.021 "digest": "sha256", 00:14:17.021 "dhgroup": "null" 00:14:17.021 } 00:14:17.021 } 00:14:17.021 ]' 00:14:17.021 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.021 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:17.021 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.021 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:17.021 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.021 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.021 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.021 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.280 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:14:17.280 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:14:18.216 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.216 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:18.216 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.216 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.216 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.216 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.216 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:18.216 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:18.475 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:18.475 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.475 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:18.475 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:18.475 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:18.475 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.476 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.476 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.476 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.476 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.476 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.476 04:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.476 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.735 00:14:18.994 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.994 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.994 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.252 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.252 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.252 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.252 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.252 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.252 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.252 { 00:14:19.252 "cntlid": 3, 00:14:19.252 "qid": 0, 00:14:19.252 "state": "enabled", 00:14:19.252 "thread": "nvmf_tgt_poll_group_000", 00:14:19.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:19.252 "listen_address": { 00:14:19.252 "trtype": "TCP", 00:14:19.252 "adrfam": "IPv4", 00:14:19.252 "traddr": "10.0.0.2", 00:14:19.252 "trsvcid": "4420" 00:14:19.252 }, 00:14:19.252 "peer_address": { 00:14:19.252 "trtype": "TCP", 00:14:19.252 "adrfam": "IPv4", 00:14:19.252 "traddr": "10.0.0.1", 00:14:19.252 "trsvcid": "37808" 00:14:19.252 }, 00:14:19.252 "auth": { 00:14:19.252 "state": "completed", 00:14:19.252 "digest": "sha256", 00:14:19.252 "dhgroup": "null" 00:14:19.252 } 00:14:19.252 } 00:14:19.252 ]' 00:14:19.252 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.252 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:19.252 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.252 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:19.252 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.253 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.253 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.253 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.512 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:14:19.512 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:14:20.454 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.454 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:20.454 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.454 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.454 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.454 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.454 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:20.454 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:20.714 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:20.714 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.714 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:20.714 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:20.714 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:20.714 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.714 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.714 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.714 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.714 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.714 04:02:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.714 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.714 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.972 00:14:20.972 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.972 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.972 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.231 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.231 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.231 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.231 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.231 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.231 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.231 { 00:14:21.231 "cntlid": 5, 00:14:21.231 "qid": 0, 00:14:21.231 "state": "enabled", 00:14:21.231 "thread": "nvmf_tgt_poll_group_000", 00:14:21.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:21.231 "listen_address": { 00:14:21.231 "trtype": "TCP", 00:14:21.231 "adrfam": "IPv4", 00:14:21.231 "traddr": "10.0.0.2", 00:14:21.231 "trsvcid": "4420" 00:14:21.231 }, 00:14:21.231 "peer_address": { 00:14:21.231 "trtype": "TCP", 00:14:21.231 "adrfam": "IPv4", 00:14:21.231 "traddr": "10.0.0.1", 00:14:21.231 "trsvcid": "37824" 00:14:21.231 }, 00:14:21.231 "auth": { 00:14:21.231 "state": "completed", 00:14:21.231 "digest": "sha256", 00:14:21.231 "dhgroup": "null" 00:14:21.231 } 00:14:21.231 } 00:14:21.231 ]' 00:14:21.231 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.489 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:21.489 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.489 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:21.489 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.489 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.489 04:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.489 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.765 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:14:21.765 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:14:22.704 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.704 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:22.704 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.704 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.704 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.704 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.704 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:22.704 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:22.962 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:22.962 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.962 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:22.962 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:22.962 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:22.962 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.962 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:22.962 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.962 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.962 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.962 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:22.962 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:22.962 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:23.219 00:14:23.220 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.220 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.220 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.478 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.478 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.478 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.478 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.478 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.478 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.478 { 00:14:23.478 "cntlid": 7, 00:14:23.478 "qid": 0, 00:14:23.478 "state": "enabled", 00:14:23.478 "thread": "nvmf_tgt_poll_group_000", 00:14:23.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:23.478 "listen_address": { 00:14:23.478 "trtype": "TCP", 00:14:23.478 "adrfam": "IPv4", 00:14:23.478 "traddr": "10.0.0.2", 00:14:23.478 "trsvcid": "4420" 00:14:23.478 }, 00:14:23.478 "peer_address": { 00:14:23.478 "trtype": "TCP", 00:14:23.478 "adrfam": "IPv4", 00:14:23.478 "traddr": "10.0.0.1", 00:14:23.478 "trsvcid": "37868" 00:14:23.478 }, 00:14:23.478 "auth": { 00:14:23.478 "state": "completed", 00:14:23.478 "digest": "sha256", 00:14:23.478 "dhgroup": "null" 00:14:23.478 } 00:14:23.478 } 00:14:23.478 ]' 00:14:23.478 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.737 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:23.737 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.737 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:23.737 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.737 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.737 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.737 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.995 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:14:23.995 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:14:24.932 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.932 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.932 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.932 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.932 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.932 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:24.932 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.932 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:24.932 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:25.191 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:25.191 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.191 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:25.191 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:25.191 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:25.191 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.191 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.191 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.191 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.191 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.191 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.191 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.191 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.449 00:14:25.449 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.449 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.449 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.707 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.707 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.707 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.707 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.707 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.707 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.707 { 00:14:25.707 "cntlid": 9, 00:14:25.707 "qid": 0, 00:14:25.707 "state": "enabled", 00:14:25.707 "thread": "nvmf_tgt_poll_group_000", 00:14:25.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:25.707 "listen_address": { 00:14:25.707 "trtype": "TCP", 00:14:25.707 "adrfam": "IPv4", 00:14:25.707 "traddr": "10.0.0.2", 00:14:25.707 "trsvcid": "4420" 00:14:25.707 }, 00:14:25.707 "peer_address": { 00:14:25.707 "trtype": "TCP", 00:14:25.707 "adrfam": "IPv4", 00:14:25.707 "traddr": "10.0.0.1", 00:14:25.707 "trsvcid": "37876" 00:14:25.707 }, 00:14:25.707 "auth": { 00:14:25.707 "state": "completed", 00:14:25.707 "digest": "sha256", 00:14:25.707 "dhgroup": "ffdhe2048" 00:14:25.707 } 00:14:25.707 } 00:14:25.707 ]' 00:14:25.707 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:25.965 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:25.965 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:25.965 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:14:25.965 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:25.965 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.965 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.965 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.223 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:14:26.223 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:14:27.172 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.172 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:27.172 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.172 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.172 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.172 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.172 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:27.172 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:27.430 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:27.430 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.430 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:27.430 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:27.430 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:27.430 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.430 04:02:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.430 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.430 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.430 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.430 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.430 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.430 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.688 00:14:27.688 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.688 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.688 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.946 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.946 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.946 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.946 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.946 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.946 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.946 { 00:14:27.946 "cntlid": 11, 00:14:27.946 "qid": 0, 00:14:27.946 "state": "enabled", 00:14:27.946 "thread": "nvmf_tgt_poll_group_000", 00:14:27.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:27.946 "listen_address": { 00:14:27.946 "trtype": "TCP", 00:14:27.946 "adrfam": "IPv4", 00:14:27.946 "traddr": "10.0.0.2", 00:14:27.946 "trsvcid": "4420" 00:14:27.946 }, 00:14:27.946 "peer_address": { 00:14:27.946 "trtype": "TCP", 00:14:27.946 "adrfam": "IPv4", 00:14:27.946 "traddr": "10.0.0.1", 00:14:27.946 "trsvcid": "37898" 00:14:27.946 }, 00:14:27.946 "auth": { 00:14:27.946 "state": "completed", 00:14:27.946 "digest": "sha256", 00:14:27.946 "dhgroup": "ffdhe2048" 00:14:27.946 } 00:14:27.946 } 00:14:27.946 ]' 00:14:27.946 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.204 04:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.204 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.204 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:28.204 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.204 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.204 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.204 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.462 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:14:28.462 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:14:29.402 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.402 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.402 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.402 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.402 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.402 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.402 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:29.402 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:29.661 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:29.661 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.661 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:29.661 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:29.661 04:02:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:29.661 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.661 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.661 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.661 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.661 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.661 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.661 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.661 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.919 00:14:29.919 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.919 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.919 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.177 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.177 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.177 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.177 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.177 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.177 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.177 { 00:14:30.177 "cntlid": 13, 00:14:30.177 "qid": 0, 00:14:30.177 "state": "enabled", 00:14:30.177 "thread": "nvmf_tgt_poll_group_000", 00:14:30.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:30.177 "listen_address": { 00:14:30.177 "trtype": "TCP", 00:14:30.177 "adrfam": "IPv4", 00:14:30.177 "traddr": "10.0.0.2", 00:14:30.177 "trsvcid": "4420" 00:14:30.177 }, 00:14:30.177 "peer_address": { 00:14:30.177 "trtype": "TCP", 00:14:30.177 "adrfam": "IPv4", 00:14:30.177 "traddr": "10.0.0.1", 00:14:30.177 "trsvcid": "33084" 00:14:30.177 }, 00:14:30.177 "auth": { 00:14:30.177 "state": "completed", 00:14:30.177 "digest": 
"sha256", 00:14:30.177 "dhgroup": "ffdhe2048" 00:14:30.177 } 00:14:30.177 } 00:14:30.177 ]' 00:14:30.177 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.177 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.177 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.436 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:30.436 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.436 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.436 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.436 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.695 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:14:30.695 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:14:31.648 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.648 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.648 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.648 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.648 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.648 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.648 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:31.648 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:31.907 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:31.907 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.907 04:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:31.907 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:31.907 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:31.907 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.907 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:31.907 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.907 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.907 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.907 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:31.907 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:31.907 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:32.165 00:14:32.165 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.165 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.165 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.423 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.423 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.423 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.423 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.423 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.423 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.423 { 00:14:32.423 "cntlid": 15, 00:14:32.423 "qid": 0, 00:14:32.424 "state": "enabled", 00:14:32.424 "thread": "nvmf_tgt_poll_group_000", 00:14:32.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:32.424 "listen_address": { 00:14:32.424 "trtype": "TCP", 00:14:32.424 "adrfam": "IPv4", 00:14:32.424 "traddr": "10.0.0.2", 00:14:32.424 "trsvcid": "4420" 00:14:32.424 }, 00:14:32.424 "peer_address": { 00:14:32.424 "trtype": "TCP", 00:14:32.424 "adrfam": "IPv4", 00:14:32.424 "traddr": "10.0.0.1", 00:14:32.424 
"trsvcid": "33092" 00:14:32.424 }, 00:14:32.424 "auth": { 00:14:32.424 "state": "completed", 00:14:32.424 "digest": "sha256", 00:14:32.424 "dhgroup": "ffdhe2048" 00:14:32.424 } 00:14:32.424 } 00:14:32.424 ]' 00:14:32.424 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.424 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.424 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.424 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:32.424 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.424 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.424 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.424 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.992 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:14:32.992 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:14:33.559 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.817 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.817 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.817 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.817 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.817 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:33.817 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.817 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:33.817 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:34.075 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:34.075 04:02:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.075 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:34.075 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:34.075 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:34.075 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.075 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.075 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.075 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.075 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.075 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.075 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.075 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.334 00:14:34.334 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.334 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.334 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.592 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.593 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.593 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.593 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.593 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.593 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.593 { 00:14:34.593 "cntlid": 17, 00:14:34.593 "qid": 0, 00:14:34.593 "state": "enabled", 00:14:34.593 "thread": "nvmf_tgt_poll_group_000", 00:14:34.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:34.593 "listen_address": { 00:14:34.593 "trtype": "TCP", 00:14:34.593 "adrfam": "IPv4", 
00:14:34.593 "traddr": "10.0.0.2", 00:14:34.593 "trsvcid": "4420" 00:14:34.593 }, 00:14:34.593 "peer_address": { 00:14:34.593 "trtype": "TCP", 00:14:34.593 "adrfam": "IPv4", 00:14:34.593 "traddr": "10.0.0.1", 00:14:34.593 "trsvcid": "33110" 00:14:34.593 }, 00:14:34.593 "auth": { 00:14:34.593 "state": "completed", 00:14:34.593 "digest": "sha256", 00:14:34.593 "dhgroup": "ffdhe3072" 00:14:34.593 } 00:14:34.593 } 00:14:34.593 ]' 00:14:34.593 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.593 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.593 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.593 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:34.593 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.850 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.850 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.850 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.108 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:14:35.108 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.045 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.303 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.561 00:14:36.561 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.562 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.562 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.820 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.820 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.820 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.820 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.820 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.820 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.820 { 
00:14:36.820 "cntlid": 19, 00:14:36.820 "qid": 0, 00:14:36.820 "state": "enabled", 00:14:36.820 "thread": "nvmf_tgt_poll_group_000", 00:14:36.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:36.820 "listen_address": { 00:14:36.820 "trtype": "TCP", 00:14:36.820 "adrfam": "IPv4", 00:14:36.820 "traddr": "10.0.0.2", 00:14:36.820 "trsvcid": "4420" 00:14:36.820 }, 00:14:36.820 "peer_address": { 00:14:36.820 "trtype": "TCP", 00:14:36.820 "adrfam": "IPv4", 00:14:36.820 "traddr": "10.0.0.1", 00:14:36.820 "trsvcid": "33144" 00:14:36.820 }, 00:14:36.820 "auth": { 00:14:36.820 "state": "completed", 00:14:36.820 "digest": "sha256", 00:14:36.820 "dhgroup": "ffdhe3072" 00:14:36.820 } 00:14:36.820 } 00:14:36.820 ]' 00:14:36.820 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.820 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.820 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.820 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:36.820 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.820 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.820 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.820 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.387 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:14:37.387 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:14:38.325 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.325 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.325 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.325 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.325 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.325 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.325 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:38.325 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:38.325 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:38.325 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.325 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:38.325 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:38.325 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:38.325 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.325 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.325 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.325 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.584 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.584 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.584 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.584 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.842 00:14:38.842 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:38.842 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.842 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.100 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.100 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.100 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.100 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.100 04:02:33 
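[editor's note] The qpair listing for this pass continues below. The host-side knob that scopes each pass is the bdev_nvme_set_options call repeated before every attach; by pinning the initiator to a single digest and DH group, the pass can only succeed if the target negotiates exactly that combination. A sketch with the values used in this slice of the run:

    # Host side: restrict the initiator to sha256 + ffdhe3072 before the next attach.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072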
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.100 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.100 { 00:14:39.100 "cntlid": 21, 00:14:39.100 "qid": 0, 00:14:39.100 "state": "enabled", 00:14:39.100 "thread": "nvmf_tgt_poll_group_000", 00:14:39.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:39.100 "listen_address": { 00:14:39.100 "trtype": "TCP", 00:14:39.100 "adrfam": "IPv4", 00:14:39.100 "traddr": "10.0.0.2", 00:14:39.100 "trsvcid": "4420" 00:14:39.100 }, 00:14:39.100 "peer_address": { 00:14:39.100 "trtype": "TCP", 00:14:39.100 "adrfam": "IPv4", 00:14:39.100 "traddr": "10.0.0.1", 00:14:39.100 "trsvcid": "37150" 00:14:39.100 }, 00:14:39.100 "auth": { 00:14:39.100 "state": "completed", 00:14:39.100 "digest": "sha256", 00:14:39.100 "dhgroup": "ffdhe3072" 00:14:39.100 } 00:14:39.100 } 00:14:39.100 ]' 00:14:39.100 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.100 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.100 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.100 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:39.100 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.100 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.100 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.100 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.669 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:14:39.669 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:14:40.234 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.234 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:40.234 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.234 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.492 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:40.492 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.492 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:40.492 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:40.755 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:40.755 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.755 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:40.755 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:40.755 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:40.755 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.755 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:40.755 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.755 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.755 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.755 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:40.755 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:40.755 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:41.014 00:14:41.014 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.014 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.014 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.272 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.272 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.272 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.272 04:02:35 
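[editor's note] The xtrace markers repeated through this section (target/auth.sh@119, @120, @121, @123) reveal the loop that drives these passes; the qpair dump for the key3 pass follows below. A rough, hypothetical reconstruction of that loop, with the array contents and helper functions (hostrpc, connect_authenticate) assumed to be the script's own, defined earlier in the file:

    # Hypothetical reconstruction of the driving loop; not the verbatim script.
    # keys[] / dhgroups[] contents are assumptions based on the names seen in the trace.
    keys=(key0 key1 key2 key3)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)
    for dhgroup in "${dhgroups[@]}"; do        # auth.sh@119
        for keyid in "${!keys[@]}"; do         # auth.sh@120
            # limit the host to the digest/dhgroup under test, then run one pass
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"   # auth.sh@123
        done
    done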
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.272 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.272 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.272 { 00:14:41.272 "cntlid": 23, 00:14:41.272 "qid": 0, 00:14:41.272 "state": "enabled", 00:14:41.272 "thread": "nvmf_tgt_poll_group_000", 00:14:41.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:41.272 "listen_address": { 00:14:41.272 "trtype": "TCP", 00:14:41.272 "adrfam": "IPv4", 00:14:41.272 "traddr": "10.0.0.2", 00:14:41.272 "trsvcid": "4420" 00:14:41.272 }, 00:14:41.272 "peer_address": { 00:14:41.272 "trtype": "TCP", 00:14:41.272 "adrfam": "IPv4", 00:14:41.272 "traddr": "10.0.0.1", 00:14:41.272 "trsvcid": "37176" 00:14:41.272 }, 00:14:41.272 "auth": { 00:14:41.272 "state": "completed", 00:14:41.272 "digest": "sha256", 00:14:41.272 "dhgroup": "ffdhe3072" 00:14:41.272 } 00:14:41.272 } 00:14:41.272 ]' 00:14:41.272 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.272 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.272 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.530 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:41.530 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.530 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.530 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.530 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.787 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:14:41.787 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:14:42.723 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.723 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:42.723 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.723 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.723 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:42.723 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:42.723 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.723 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:42.723 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:42.981 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:42.981 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.981 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:42.981 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:42.981 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:42.981 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.981 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.981 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.981 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.981 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.981 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.981 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.981 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.549 00:14:43.549 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.549 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.549 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.807 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.807 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.807 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.807 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.807 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.807 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.807 { 00:14:43.807 "cntlid": 25, 00:14:43.807 "qid": 0, 00:14:43.807 "state": "enabled", 00:14:43.807 "thread": "nvmf_tgt_poll_group_000", 00:14:43.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:43.807 "listen_address": { 00:14:43.807 "trtype": "TCP", 00:14:43.807 "adrfam": "IPv4", 00:14:43.807 "traddr": "10.0.0.2", 00:14:43.807 "trsvcid": "4420" 00:14:43.807 }, 00:14:43.807 "peer_address": { 00:14:43.807 "trtype": "TCP", 00:14:43.807 "adrfam": "IPv4", 00:14:43.807 "traddr": "10.0.0.1", 00:14:43.807 "trsvcid": "37218" 00:14:43.807 }, 00:14:43.807 "auth": { 00:14:43.807 "state": "completed", 00:14:43.807 "digest": "sha256", 00:14:43.807 "dhgroup": "ffdhe4096" 00:14:43.807 } 00:14:43.807 } 00:14:43.807 ]' 00:14:43.807 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.807 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.807 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.807 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:43.807 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:43.807 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.807 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.807 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.066 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:14:44.066 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:14:45.002 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.002 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:45.002 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.002 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.002 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.002 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.002 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:45.002 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:45.260 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:45.260 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.260 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:45.260 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:45.260 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:45.260 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.260 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.260 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.260 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.260 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.260 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.260 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.260 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.827 00:14:45.827 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.827 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.827 04:02:39 
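[editor's note] Alongside the SPDK bdev_nvme initiator, every pass above closes with a kernel-initiator check through nvme-cli (the nvme_connect / nvme disconnect lines in the trace). In sketch form, with the generated DHHC-1 secrets abbreviated to placeholders (the full strings appear verbatim in the trace):

    # Kernel initiator: connect with the host secret and, for bidirectional auth,
    # the controller secret, then tear the session down again.
    nvme connect -t tcp -a 10.0.0.2 -l 0 -i 1 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret 'DHHC-1:01:<host key>' \
        --dhchap-ctrl-secret 'DHHC-1:02:<controller key>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0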
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.085 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.085 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.085 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.085 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.085 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.085 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.085 { 00:14:46.085 "cntlid": 27, 00:14:46.085 "qid": 0, 00:14:46.085 "state": "enabled", 00:14:46.085 "thread": "nvmf_tgt_poll_group_000", 00:14:46.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:46.085 "listen_address": { 00:14:46.085 "trtype": "TCP", 00:14:46.085 "adrfam": "IPv4", 00:14:46.085 "traddr": "10.0.0.2", 00:14:46.085 "trsvcid": "4420" 00:14:46.085 }, 00:14:46.085 "peer_address": { 00:14:46.085 "trtype": "TCP", 00:14:46.085 "adrfam": "IPv4", 00:14:46.085 "traddr": "10.0.0.1", 00:14:46.085 "trsvcid": "37242" 00:14:46.085 }, 00:14:46.085 "auth": { 00:14:46.085 "state": "completed", 00:14:46.085 "digest": "sha256", 00:14:46.085 "dhgroup": "ffdhe4096" 00:14:46.085 } 00:14:46.085 } 00:14:46.085 ]' 00:14:46.085 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.085 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.085 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.085 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:46.085 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.085 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.085 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.085 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.344 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:14:46.344 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:14:47.277 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.277 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.277 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:47.277 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.277 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.277 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.277 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.277 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:47.277 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:47.535 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:47.535 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.535 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:47.535 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:47.535 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:47.535 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.535 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.535 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.535 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.535 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.535 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.535 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.535 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.104 00:14:48.104 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.104 04:02:42 
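[editor's note] Each verified pass is unwound the same way: the host-side controller is detached, the nvme-cli check runs, and the host NQN is removed from the subsystem so the next key/dhgroup combination starts from a clean state. The two RPCs involved, as seen in the trace:

    # Host side: drop the bdev_nvme controller created for this pass.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # Target side: de-authorize the host NQN again.
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55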
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.104 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.104 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.104 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.104 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.104 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.104 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.104 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.104 { 00:14:48.104 "cntlid": 29, 00:14:48.104 "qid": 0, 00:14:48.104 "state": "enabled", 00:14:48.104 "thread": "nvmf_tgt_poll_group_000", 00:14:48.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:48.104 "listen_address": { 00:14:48.104 "trtype": "TCP", 00:14:48.104 "adrfam": "IPv4", 00:14:48.104 "traddr": "10.0.0.2", 00:14:48.104 "trsvcid": "4420" 00:14:48.104 }, 00:14:48.104 "peer_address": { 00:14:48.104 "trtype": "TCP", 00:14:48.104 "adrfam": "IPv4", 00:14:48.104 "traddr": "10.0.0.1", 00:14:48.104 "trsvcid": "37274" 00:14:48.104 }, 00:14:48.104 "auth": { 00:14:48.104 "state": "completed", 00:14:48.104 "digest": "sha256", 00:14:48.104 "dhgroup": "ffdhe4096" 00:14:48.104 } 00:14:48.104 } 00:14:48.104 ]' 00:14:48.104 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.362 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.362 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.362 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:48.362 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.362 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.362 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.362 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.620 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:14:48.620 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret 
DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:14:49.557 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.557 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.557 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.557 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.557 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.557 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.557 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:49.557 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:49.815 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:49.815 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.815 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:49.815 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:49.815 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:49.815 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.815 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:49.815 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.815 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.815 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.815 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:49.815 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:49.815 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:50.384 00:14:50.384 04:02:44 
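[editor's note] Note how the key3 passes (including the one starting above) call nvmf_subsystem_add_host and bdev_nvme_attach_controller with --dhchap-key key3 but no --dhchap-ctrlr-key: the ckey expansion at auth.sh@68 only adds the controller-key flag when a controller key exists for that key id, so with ckeys[3] empty these passes exercise host-only (unidirectional) authentication. A small illustration of that parameter-expansion idiom, with placeholder array contents:

    # Illustrative values only; the real ckeys[] array is built earlier in the script.
    ckeys=([0]="set" [1]="set" [2]="set" [3]="")
    keyid=3
    # Expands to (--dhchap-ctrlr-key ckeyN) only when ckeys[keyid] is non-empty.
    ckey_args=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey_args[@]}"   # prints 0 for key3, 2 for key0..key2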
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.384 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.384 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.642 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.642 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.642 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.642 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.642 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.642 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.642 { 00:14:50.642 "cntlid": 31, 00:14:50.642 "qid": 0, 00:14:50.642 "state": "enabled", 00:14:50.642 "thread": "nvmf_tgt_poll_group_000", 00:14:50.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:50.642 "listen_address": { 00:14:50.642 "trtype": "TCP", 00:14:50.642 "adrfam": "IPv4", 00:14:50.642 "traddr": "10.0.0.2", 00:14:50.642 "trsvcid": "4420" 00:14:50.642 }, 00:14:50.642 "peer_address": { 00:14:50.642 "trtype": "TCP", 00:14:50.642 "adrfam": "IPv4", 00:14:50.642 "traddr": "10.0.0.1", 00:14:50.642 "trsvcid": "55122" 00:14:50.642 }, 00:14:50.642 "auth": { 00:14:50.642 "state": "completed", 00:14:50.642 "digest": "sha256", 00:14:50.642 "dhgroup": "ffdhe4096" 00:14:50.642 } 00:14:50.642 } 00:14:50.642 ]' 00:14:50.642 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.642 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.642 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.642 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:50.642 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.642 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.643 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.643 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.899 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:14:50.899 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:14:51.832 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.832 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:51.832 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.832 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.832 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.832 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:51.832 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.832 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:51.832 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:52.090 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:52.090 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.090 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:52.090 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:52.090 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:52.090 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.090 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.090 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.090 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.090 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.090 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.090 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.090 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.661 00:14:52.661 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.661 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.661 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.919 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.919 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.919 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.919 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.919 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.919 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.919 { 00:14:52.919 "cntlid": 33, 00:14:52.919 "qid": 0, 00:14:52.919 "state": "enabled", 00:14:52.919 "thread": "nvmf_tgt_poll_group_000", 00:14:52.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:52.919 "listen_address": { 00:14:52.919 "trtype": "TCP", 00:14:52.919 "adrfam": "IPv4", 00:14:52.919 "traddr": "10.0.0.2", 00:14:52.919 "trsvcid": "4420" 00:14:52.919 }, 00:14:52.919 "peer_address": { 00:14:52.919 "trtype": "TCP", 00:14:52.919 "adrfam": "IPv4", 00:14:52.919 "traddr": "10.0.0.1", 00:14:52.919 "trsvcid": "55152" 00:14:52.919 }, 00:14:52.919 "auth": { 00:14:52.919 "state": "completed", 00:14:52.919 "digest": "sha256", 00:14:52.919 "dhgroup": "ffdhe6144" 00:14:52.919 } 00:14:52.919 } 00:14:52.919 ]' 00:14:52.919 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.919 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.919 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.919 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:52.919 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.919 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.919 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.919 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.178 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:14:53.178 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:14:54.153 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.153 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:54.153 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.153 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.153 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.153 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.153 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:54.153 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:54.411 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:54.411 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.411 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:54.411 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:54.411 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:54.411 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.411 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.411 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.411 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.411 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.411 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.411 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.411 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.979 00:14:54.979 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.979 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.979 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.546 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.546 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.546 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.546 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.546 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.546 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.546 { 00:14:55.546 "cntlid": 35, 00:14:55.546 "qid": 0, 00:14:55.546 "state": "enabled", 00:14:55.546 "thread": "nvmf_tgt_poll_group_000", 00:14:55.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:55.546 "listen_address": { 00:14:55.546 "trtype": "TCP", 00:14:55.546 "adrfam": "IPv4", 00:14:55.546 "traddr": "10.0.0.2", 00:14:55.546 "trsvcid": "4420" 00:14:55.546 }, 00:14:55.546 "peer_address": { 00:14:55.546 "trtype": "TCP", 00:14:55.546 "adrfam": "IPv4", 00:14:55.546 "traddr": "10.0.0.1", 00:14:55.546 "trsvcid": "55164" 00:14:55.546 }, 00:14:55.546 "auth": { 00:14:55.546 "state": "completed", 00:14:55.546 "digest": "sha256", 00:14:55.546 "dhgroup": "ffdhe6144" 00:14:55.546 } 00:14:55.546 } 00:14:55.546 ]' 00:14:55.546 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.546 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.546 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.546 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:55.546 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.546 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.546 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.547 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.804 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:14:55.804 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:14:56.742 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.742 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.742 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.742 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.742 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.742 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.742 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:56.742 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:57.000 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:57.000 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.000 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:57.000 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:57.000 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:57.000 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.000 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.000 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.000 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.000 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.000 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.000 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.000 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.565 00:14:57.565 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.565 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.565 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.824 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.824 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.824 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.824 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.824 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.824 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.824 { 00:14:57.824 "cntlid": 37, 00:14:57.824 "qid": 0, 00:14:57.824 "state": "enabled", 00:14:57.824 "thread": "nvmf_tgt_poll_group_000", 00:14:57.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:57.824 "listen_address": { 00:14:57.824 "trtype": "TCP", 00:14:57.824 "adrfam": "IPv4", 00:14:57.824 "traddr": "10.0.0.2", 00:14:57.824 "trsvcid": "4420" 00:14:57.824 }, 00:14:57.824 "peer_address": { 00:14:57.824 "trtype": "TCP", 00:14:57.824 "adrfam": "IPv4", 00:14:57.824 "traddr": "10.0.0.1", 00:14:57.824 "trsvcid": "55182" 00:14:57.824 }, 00:14:57.824 "auth": { 00:14:57.824 "state": "completed", 00:14:57.824 "digest": "sha256", 00:14:57.824 "dhgroup": "ffdhe6144" 00:14:57.824 } 00:14:57.824 } 00:14:57.824 ]' 00:14:57.824 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.824 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.824 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.824 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:57.824 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.824 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.824 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:57.824 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.083 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:14:58.083 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:14:59.018 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.018 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:59.018 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.018 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.018 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.018 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.018 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:59.018 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:59.278 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:59.278 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.278 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:59.278 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:59.278 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:59.278 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.278 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:59.278 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.278 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.537 04:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.537 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:59.537 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.537 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:00.105 00:15:00.105 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.105 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.105 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.105 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.105 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.105 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.105 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.363 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.363 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.363 { 00:15:00.363 "cntlid": 39, 00:15:00.363 "qid": 0, 00:15:00.363 "state": "enabled", 00:15:00.363 "thread": "nvmf_tgt_poll_group_000", 00:15:00.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:00.363 "listen_address": { 00:15:00.363 "trtype": "TCP", 00:15:00.363 "adrfam": "IPv4", 00:15:00.363 "traddr": "10.0.0.2", 00:15:00.363 "trsvcid": "4420" 00:15:00.363 }, 00:15:00.363 "peer_address": { 00:15:00.363 "trtype": "TCP", 00:15:00.363 "adrfam": "IPv4", 00:15:00.363 "traddr": "10.0.0.1", 00:15:00.363 "trsvcid": "54730" 00:15:00.363 }, 00:15:00.363 "auth": { 00:15:00.363 "state": "completed", 00:15:00.363 "digest": "sha256", 00:15:00.363 "dhgroup": "ffdhe6144" 00:15:00.363 } 00:15:00.363 } 00:15:00.363 ]' 00:15:00.363 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.363 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.363 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.363 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:00.363 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.363 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:15:00.363 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.363 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.621 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:15:00.621 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:15:01.568 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.568 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:01.568 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.568 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.568 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.568 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:01.568 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.568 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:01.568 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:01.826 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:01.826 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.826 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:01.826 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:01.826 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:01.826 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.826 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.826 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:01.826 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.826 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.826 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.826 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.826 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.765 00:15:02.765 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.765 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.765 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.023 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.023 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.023 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.023 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.023 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.023 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.023 { 00:15:03.023 "cntlid": 41, 00:15:03.023 "qid": 0, 00:15:03.023 "state": "enabled", 00:15:03.023 "thread": "nvmf_tgt_poll_group_000", 00:15:03.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:03.023 "listen_address": { 00:15:03.023 "trtype": "TCP", 00:15:03.023 "adrfam": "IPv4", 00:15:03.023 "traddr": "10.0.0.2", 00:15:03.023 "trsvcid": "4420" 00:15:03.023 }, 00:15:03.023 "peer_address": { 00:15:03.023 "trtype": "TCP", 00:15:03.023 "adrfam": "IPv4", 00:15:03.023 "traddr": "10.0.0.1", 00:15:03.023 "trsvcid": "54764" 00:15:03.023 }, 00:15:03.023 "auth": { 00:15:03.023 "state": "completed", 00:15:03.023 "digest": "sha256", 00:15:03.023 "dhgroup": "ffdhe8192" 00:15:03.023 } 00:15:03.023 } 00:15:03.023 ]' 00:15:03.023 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.023 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.023 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.023 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:03.023 04:02:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.023 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.023 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.023 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.282 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:15:03.282 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:15:04.221 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.221 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.221 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.221 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.221 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.221 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.221 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.221 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.479 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:04.479 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.479 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:04.479 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:04.479 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:04.479 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.479 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.479 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.479 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.479 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.479 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.479 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.479 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.415 00:15:05.415 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.415 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.415 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.673 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.673 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.673 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.673 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.673 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.673 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.673 { 00:15:05.673 "cntlid": 43, 00:15:05.673 "qid": 0, 00:15:05.673 "state": "enabled", 00:15:05.673 "thread": "nvmf_tgt_poll_group_000", 00:15:05.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:05.673 "listen_address": { 00:15:05.673 "trtype": "TCP", 00:15:05.673 "adrfam": "IPv4", 00:15:05.673 "traddr": "10.0.0.2", 00:15:05.673 "trsvcid": "4420" 00:15:05.673 }, 00:15:05.673 "peer_address": { 00:15:05.673 "trtype": "TCP", 00:15:05.673 "adrfam": "IPv4", 00:15:05.673 "traddr": "10.0.0.1", 00:15:05.673 "trsvcid": "54792" 00:15:05.673 }, 00:15:05.673 "auth": { 00:15:05.673 "state": "completed", 00:15:05.673 "digest": "sha256", 00:15:05.673 "dhgroup": "ffdhe8192" 00:15:05.673 } 00:15:05.673 } 00:15:05.673 ]' 00:15:05.673 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.673 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:15:05.673 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.673 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:05.673 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.673 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.673 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.673 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.241 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:15:06.241 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:07.176 04:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.176 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.109 00:15:08.109 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.109 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.109 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.367 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.367 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.367 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.367 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.367 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.367 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.367 { 00:15:08.367 "cntlid": 45, 00:15:08.367 "qid": 0, 00:15:08.367 "state": "enabled", 00:15:08.367 "thread": "nvmf_tgt_poll_group_000", 00:15:08.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:08.367 "listen_address": { 00:15:08.367 "trtype": "TCP", 00:15:08.367 "adrfam": "IPv4", 00:15:08.367 "traddr": "10.0.0.2", 00:15:08.367 "trsvcid": "4420" 00:15:08.367 }, 00:15:08.367 "peer_address": { 00:15:08.367 "trtype": "TCP", 00:15:08.367 "adrfam": "IPv4", 00:15:08.367 "traddr": "10.0.0.1", 00:15:08.367 "trsvcid": "54836" 00:15:08.367 }, 00:15:08.367 "auth": { 00:15:08.367 "state": "completed", 00:15:08.367 "digest": "sha256", 00:15:08.367 "dhgroup": "ffdhe8192" 00:15:08.367 } 00:15:08.367 } 00:15:08.367 ]' 00:15:08.367 
04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.367 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.367 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.625 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:08.625 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.625 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.625 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.625 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.883 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:15:08.883 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:15:09.818 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.818 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:09.818 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.818 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.818 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.818 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.818 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:09.818 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:10.076 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:10.076 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.076 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:10.076 04:03:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:10.076 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:10.076 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.076 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:10.076 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.076 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.076 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.076 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:10.076 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.076 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.013 00:15:11.013 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.013 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.013 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.013 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.013 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.013 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.013 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.013 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.013 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.013 { 00:15:11.013 "cntlid": 47, 00:15:11.013 "qid": 0, 00:15:11.013 "state": "enabled", 00:15:11.013 "thread": "nvmf_tgt_poll_group_000", 00:15:11.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:11.013 "listen_address": { 00:15:11.013 "trtype": "TCP", 00:15:11.013 "adrfam": "IPv4", 00:15:11.013 "traddr": "10.0.0.2", 00:15:11.013 "trsvcid": "4420" 00:15:11.013 }, 00:15:11.013 "peer_address": { 00:15:11.013 "trtype": "TCP", 00:15:11.013 "adrfam": "IPv4", 00:15:11.013 "traddr": "10.0.0.1", 00:15:11.013 "trsvcid": "60508" 00:15:11.013 }, 00:15:11.014 "auth": { 00:15:11.014 "state": "completed", 00:15:11.014 
"digest": "sha256", 00:15:11.014 "dhgroup": "ffdhe8192" 00:15:11.014 } 00:15:11.014 } 00:15:11.014 ]' 00:15:11.014 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.272 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.272 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.272 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:11.272 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.272 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.272 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.272 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.530 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:15:11.530 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:15:12.465 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.465 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:12.465 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.465 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.465 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.465 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:12.465 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:12.465 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.465 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:12.465 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:12.723 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:12.723 04:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.723 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:12.723 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:12.723 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:12.723 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.723 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.723 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.723 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.723 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.723 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.723 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.723 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.290 00:15:13.290 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.290 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.290 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.548 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.548 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.548 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.548 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.548 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.548 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.548 { 00:15:13.548 "cntlid": 49, 00:15:13.548 "qid": 0, 00:15:13.548 "state": "enabled", 00:15:13.548 "thread": "nvmf_tgt_poll_group_000", 00:15:13.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:13.548 "listen_address": { 00:15:13.548 "trtype": "TCP", 00:15:13.548 "adrfam": "IPv4", 
00:15:13.548 "traddr": "10.0.0.2", 00:15:13.548 "trsvcid": "4420" 00:15:13.548 }, 00:15:13.548 "peer_address": { 00:15:13.548 "trtype": "TCP", 00:15:13.548 "adrfam": "IPv4", 00:15:13.548 "traddr": "10.0.0.1", 00:15:13.548 "trsvcid": "60536" 00:15:13.548 }, 00:15:13.548 "auth": { 00:15:13.548 "state": "completed", 00:15:13.548 "digest": "sha384", 00:15:13.548 "dhgroup": "null" 00:15:13.548 } 00:15:13.548 } 00:15:13.548 ]' 00:15:13.548 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.548 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:13.548 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.548 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:13.548 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.548 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.548 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.548 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.806 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:15:13.806 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:15:14.740 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.740 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:14.740 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.740 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.740 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.740 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.740 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:14.740 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:14.998 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:14.998 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.998 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:14.998 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:14.998 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:14.998 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.998 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.998 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.998 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.998 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.998 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.998 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.998 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.566 00:15:15.566 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.566 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.566 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.566 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.566 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.566 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.566 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.566 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.566 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.566 { 00:15:15.566 "cntlid": 51, 00:15:15.566 "qid": 0, 00:15:15.566 "state": "enabled", 
00:15:15.566 "thread": "nvmf_tgt_poll_group_000", 00:15:15.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:15.566 "listen_address": { 00:15:15.566 "trtype": "TCP", 00:15:15.566 "adrfam": "IPv4", 00:15:15.566 "traddr": "10.0.0.2", 00:15:15.566 "trsvcid": "4420" 00:15:15.566 }, 00:15:15.566 "peer_address": { 00:15:15.566 "trtype": "TCP", 00:15:15.566 "adrfam": "IPv4", 00:15:15.566 "traddr": "10.0.0.1", 00:15:15.566 "trsvcid": "60564" 00:15:15.566 }, 00:15:15.566 "auth": { 00:15:15.566 "state": "completed", 00:15:15.566 "digest": "sha384", 00:15:15.566 "dhgroup": "null" 00:15:15.566 } 00:15:15.566 } 00:15:15.566 ]' 00:15:15.566 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.824 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:15.824 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.824 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:15.824 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.824 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.824 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.824 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.083 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:15:16.083 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:15:17.017 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.017 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.017 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.017 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.017 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.017 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.017 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:15:17.017 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:17.274 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:17.274 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.274 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:17.274 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:17.274 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:17.274 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.274 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.274 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.274 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.274 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.274 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.274 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.274 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.532 00:15:17.532 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.532 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.532 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.790 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.790 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.790 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.790 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.790 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.790 04:03:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.790 { 00:15:17.790 "cntlid": 53, 00:15:17.790 "qid": 0, 00:15:17.790 "state": "enabled", 00:15:17.790 "thread": "nvmf_tgt_poll_group_000", 00:15:17.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:17.790 "listen_address": { 00:15:17.790 "trtype": "TCP", 00:15:17.790 "adrfam": "IPv4", 00:15:17.790 "traddr": "10.0.0.2", 00:15:17.790 "trsvcid": "4420" 00:15:17.790 }, 00:15:17.790 "peer_address": { 00:15:17.790 "trtype": "TCP", 00:15:17.790 "adrfam": "IPv4", 00:15:17.790 "traddr": "10.0.0.1", 00:15:17.790 "trsvcid": "60588" 00:15:17.790 }, 00:15:17.790 "auth": { 00:15:17.790 "state": "completed", 00:15:17.790 "digest": "sha384", 00:15:17.790 "dhgroup": "null" 00:15:17.790 } 00:15:17.790 } 00:15:17.790 ]' 00:15:17.790 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.790 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.790 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.048 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:18.048 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.048 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.048 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.048 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.306 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:15:18.306 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:15:19.245 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.245 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:19.245 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.245 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.245 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.245 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:15:19.245 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:19.245 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:19.503 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:19.503 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.503 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:19.503 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:19.503 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:19.503 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.503 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:19.503 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.503 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.503 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.503 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:19.503 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.503 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.761 00:15:19.761 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.761 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.761 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.019 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.019 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.019 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.019 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.019 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.019 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.019 { 00:15:20.019 "cntlid": 55, 00:15:20.019 "qid": 0, 00:15:20.019 "state": "enabled", 00:15:20.019 "thread": "nvmf_tgt_poll_group_000", 00:15:20.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:20.019 "listen_address": { 00:15:20.019 "trtype": "TCP", 00:15:20.019 "adrfam": "IPv4", 00:15:20.019 "traddr": "10.0.0.2", 00:15:20.019 "trsvcid": "4420" 00:15:20.019 }, 00:15:20.019 "peer_address": { 00:15:20.019 "trtype": "TCP", 00:15:20.019 "adrfam": "IPv4", 00:15:20.019 "traddr": "10.0.0.1", 00:15:20.019 "trsvcid": "44576" 00:15:20.019 }, 00:15:20.019 "auth": { 00:15:20.019 "state": "completed", 00:15:20.019 "digest": "sha384", 00:15:20.019 "dhgroup": "null" 00:15:20.019 } 00:15:20.019 } 00:15:20.019 ]' 00:15:20.019 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.019 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.019 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.019 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:20.019 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.019 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.019 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.019 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.586 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:15:20.586 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:21.520 04:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.520 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.086 00:15:22.086 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.086 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.086 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.344 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.344 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.344 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:22.344 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.344 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.344 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.344 { 00:15:22.344 "cntlid": 57, 00:15:22.344 "qid": 0, 00:15:22.344 "state": "enabled", 00:15:22.344 "thread": "nvmf_tgt_poll_group_000", 00:15:22.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:22.344 "listen_address": { 00:15:22.344 "trtype": "TCP", 00:15:22.344 "adrfam": "IPv4", 00:15:22.344 "traddr": "10.0.0.2", 00:15:22.344 "trsvcid": "4420" 00:15:22.344 }, 00:15:22.344 "peer_address": { 00:15:22.344 "trtype": "TCP", 00:15:22.344 "adrfam": "IPv4", 00:15:22.344 "traddr": "10.0.0.1", 00:15:22.344 "trsvcid": "44616" 00:15:22.344 }, 00:15:22.344 "auth": { 00:15:22.344 "state": "completed", 00:15:22.344 "digest": "sha384", 00:15:22.344 "dhgroup": "ffdhe2048" 00:15:22.344 } 00:15:22.344 } 00:15:22.344 ]' 00:15:22.344 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.344 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.344 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.344 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:22.344 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.344 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.344 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.344 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.602 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:15:22.603 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:15:23.589 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.589 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:23.589 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.589 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.590 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.590 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.590 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:23.590 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:23.848 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:23.848 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.848 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:23.848 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:23.848 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:23.848 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.848 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.848 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.848 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.848 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.848 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.848 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.848 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.107 00:15:24.107 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.107 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.107 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.676 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.676 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.676 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.676 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.676 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.676 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.676 { 00:15:24.676 "cntlid": 59, 00:15:24.676 "qid": 0, 00:15:24.676 "state": "enabled", 00:15:24.676 "thread": "nvmf_tgt_poll_group_000", 00:15:24.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:24.676 "listen_address": { 00:15:24.676 "trtype": "TCP", 00:15:24.676 "adrfam": "IPv4", 00:15:24.676 "traddr": "10.0.0.2", 00:15:24.676 "trsvcid": "4420" 00:15:24.676 }, 00:15:24.676 "peer_address": { 00:15:24.676 "trtype": "TCP", 00:15:24.676 "adrfam": "IPv4", 00:15:24.676 "traddr": "10.0.0.1", 00:15:24.676 "trsvcid": "44634" 00:15:24.676 }, 00:15:24.676 "auth": { 00:15:24.676 "state": "completed", 00:15:24.676 "digest": "sha384", 00:15:24.676 "dhgroup": "ffdhe2048" 00:15:24.676 } 00:15:24.676 } 00:15:24.676 ]' 00:15:24.676 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.676 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.676 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.676 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:24.676 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.676 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.676 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.676 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.934 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:15:24.934 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:15:25.868 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.868 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:25.868 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.868 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.868 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.868 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.868 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:25.868 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:26.126 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:26.126 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.126 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:26.126 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:26.126 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:26.126 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.126 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.126 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.126 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.126 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.126 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.126 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.126 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.385 00:15:26.385 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.385 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.385 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.643 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.643 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.643 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.643 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.643 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.643 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.643 { 00:15:26.643 "cntlid": 61, 00:15:26.643 "qid": 0, 00:15:26.643 "state": "enabled", 00:15:26.643 "thread": "nvmf_tgt_poll_group_000", 00:15:26.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:26.643 "listen_address": { 00:15:26.643 "trtype": "TCP", 00:15:26.643 "adrfam": "IPv4", 00:15:26.643 "traddr": "10.0.0.2", 00:15:26.643 "trsvcid": "4420" 00:15:26.643 }, 00:15:26.643 "peer_address": { 00:15:26.643 "trtype": "TCP", 00:15:26.643 "adrfam": "IPv4", 00:15:26.643 "traddr": "10.0.0.1", 00:15:26.643 "trsvcid": "44666" 00:15:26.643 }, 00:15:26.643 "auth": { 00:15:26.643 "state": "completed", 00:15:26.643 "digest": "sha384", 00:15:26.643 "dhgroup": "ffdhe2048" 00:15:26.643 } 00:15:26.643 } 00:15:26.643 ]' 00:15:26.643 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.643 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.643 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.901 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:26.901 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.901 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.901 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.901 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.160 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:15:27.160 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:15:28.098 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.098 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:28.098 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.098 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.098 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.098 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.098 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:28.098 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:28.357 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:28.357 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.357 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:28.357 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:28.357 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:28.357 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.357 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:28.357 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.357 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.357 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.357 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:28.357 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.357 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.615 00:15:28.615 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.615 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.615 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.873 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.873 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.873 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.873 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.873 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.873 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.873 { 00:15:28.873 "cntlid": 63, 00:15:28.873 "qid": 0, 00:15:28.873 "state": "enabled", 00:15:28.873 "thread": "nvmf_tgt_poll_group_000", 00:15:28.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:28.873 "listen_address": { 00:15:28.873 "trtype": "TCP", 00:15:28.873 "adrfam": "IPv4", 00:15:28.873 "traddr": "10.0.0.2", 00:15:28.873 "trsvcid": "4420" 00:15:28.873 }, 00:15:28.873 "peer_address": { 00:15:28.873 "trtype": "TCP", 00:15:28.873 "adrfam": "IPv4", 00:15:28.873 "traddr": "10.0.0.1", 00:15:28.873 "trsvcid": "50208" 00:15:28.873 }, 00:15:28.873 "auth": { 00:15:28.873 "state": "completed", 00:15:28.873 "digest": "sha384", 00:15:28.873 "dhgroup": "ffdhe2048" 00:15:28.873 } 00:15:28.873 } 00:15:28.873 ]' 00:15:28.873 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.873 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.873 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.873 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:28.873 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.133 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.133 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.133 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.394 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:15:29.394 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:15:30.331 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:30.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.331 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:30.331 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.331 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.331 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.331 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:30.331 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.331 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:30.331 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:30.589 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:30.589 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.589 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:30.589 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:30.589 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:30.589 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.589 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.589 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.589 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.589 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.589 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.589 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.589 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.850 
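[Editor's note] The entries around this point trace the host-side RPC sequence that target/auth.sh repeats for every digest/dhgroup/key combination. A minimal sketch of that sequence, assuming (as in this run) the host-side SPDK app listens on /var/tmp/host.sock, the target already exposes nqn.2024-03.io.spdk:cnode0 on 10.0.0.2:4420, and rpc_cmd is the autotest wrapper that talks to the target's RPC socket; every command and flag below is taken from the trace itself:

    # Restrict the host bdev/nvme module to the digest and DH group under test for this round.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # Allow the host NQN on the subsystem, binding it to key0/ckey0 from the target keyring.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach a controller from the host side; the DH-HMAC-CHAP handshake runs during this call.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0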
00:15:30.850 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.850 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.850 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.108 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.108 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.108 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.108 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.108 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.108 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.108 { 00:15:31.108 "cntlid": 65, 00:15:31.108 "qid": 0, 00:15:31.108 "state": "enabled", 00:15:31.108 "thread": "nvmf_tgt_poll_group_000", 00:15:31.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:31.108 "listen_address": { 00:15:31.108 "trtype": "TCP", 00:15:31.108 "adrfam": "IPv4", 00:15:31.108 "traddr": "10.0.0.2", 00:15:31.108 "trsvcid": "4420" 00:15:31.108 }, 00:15:31.108 "peer_address": { 00:15:31.108 "trtype": "TCP", 00:15:31.108 "adrfam": "IPv4", 00:15:31.108 "traddr": "10.0.0.1", 00:15:31.108 "trsvcid": "50232" 00:15:31.108 }, 00:15:31.108 "auth": { 00:15:31.108 "state": "completed", 00:15:31.108 "digest": "sha384", 00:15:31.108 "dhgroup": "ffdhe3072" 00:15:31.108 } 00:15:31.108 } 00:15:31.108 ]' 00:15:31.108 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.366 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.366 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.366 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:31.366 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.366 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.366 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.366 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.624 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:15:31.624 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:15:32.557 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.557 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:32.557 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.557 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.557 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.557 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.557 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:32.557 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:32.815 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:32.815 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.815 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:32.815 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:32.815 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:32.815 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.815 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.815 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.815 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.815 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.815 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.815 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.815 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.074 00:15:33.074 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.074 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.074 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.332 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.332 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.332 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.332 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.332 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.332 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.332 { 00:15:33.332 "cntlid": 67, 00:15:33.332 "qid": 0, 00:15:33.332 "state": "enabled", 00:15:33.332 "thread": "nvmf_tgt_poll_group_000", 00:15:33.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:33.332 "listen_address": { 00:15:33.332 "trtype": "TCP", 00:15:33.332 "adrfam": "IPv4", 00:15:33.332 "traddr": "10.0.0.2", 00:15:33.332 "trsvcid": "4420" 00:15:33.332 }, 00:15:33.332 "peer_address": { 00:15:33.332 "trtype": "TCP", 00:15:33.332 "adrfam": "IPv4", 00:15:33.332 "traddr": "10.0.0.1", 00:15:33.332 "trsvcid": "50274" 00:15:33.332 }, 00:15:33.332 "auth": { 00:15:33.332 "state": "completed", 00:15:33.332 "digest": "sha384", 00:15:33.332 "dhgroup": "ffdhe3072" 00:15:33.332 } 00:15:33.332 } 00:15:33.332 ]' 00:15:33.332 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.332 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.332 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.590 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:33.590 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.590 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.590 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.590 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.848 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret 
DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:15:33.848 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:15:34.787 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.787 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.787 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.787 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.787 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.787 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.787 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:34.787 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:35.046 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:35.046 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.046 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:35.046 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:35.046 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:35.046 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.046 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.046 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.046 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.046 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.046 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.046 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.046 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.304 00:15:35.304 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.304 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.304 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.563 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.563 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.563 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.563 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.563 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.563 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.563 { 00:15:35.563 "cntlid": 69, 00:15:35.563 "qid": 0, 00:15:35.563 "state": "enabled", 00:15:35.563 "thread": "nvmf_tgt_poll_group_000", 00:15:35.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:35.563 "listen_address": { 00:15:35.563 "trtype": "TCP", 00:15:35.563 "adrfam": "IPv4", 00:15:35.563 "traddr": "10.0.0.2", 00:15:35.563 "trsvcid": "4420" 00:15:35.563 }, 00:15:35.563 "peer_address": { 00:15:35.563 "trtype": "TCP", 00:15:35.563 "adrfam": "IPv4", 00:15:35.563 "traddr": "10.0.0.1", 00:15:35.563 "trsvcid": "50310" 00:15:35.563 }, 00:15:35.563 "auth": { 00:15:35.563 "state": "completed", 00:15:35.563 "digest": "sha384", 00:15:35.563 "dhgroup": "ffdhe3072" 00:15:35.563 } 00:15:35.563 } 00:15:35.563 ]' 00:15:35.563 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.563 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.563 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.563 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:35.563 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.563 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.563 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.563 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:36.133 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:15:36.133 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
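[Editor's note] The key3 rounds (here and in the ffdhe2048 round earlier) pass only --dhchap-key, while the key0/key1/key2 rounds also pass --dhchap-ctrlr-key, and the matching nvme connect calls for key3 carry no --dhchap-ctrl-secret. That follows from the ckey expansion traced at auth.sh@68: when ckeys[keyid] is empty, the :+ expansion contributes nothing, which would correspond to unidirectional authentication (only the host proves possession of a key). A sketch of that expansion, with the positional $3 renamed to keyid and $hostnqn standing in for the uuid-based host NQN shown in the trace:

    # ckeys[3] is empty in this run, so ckey expands to nothing for keyid=3 and the
    # subsystem_add_host / attach_controller calls are made without a controller key.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"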
00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:37.068 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:37.635 00:15:37.635 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.635 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.635 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.893 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.893 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.893 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.893 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.893 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.893 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.893 { 00:15:37.893 "cntlid": 71, 00:15:37.893 "qid": 0, 00:15:37.893 "state": "enabled", 00:15:37.893 "thread": "nvmf_tgt_poll_group_000", 00:15:37.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:37.893 "listen_address": { 00:15:37.893 "trtype": "TCP", 00:15:37.893 "adrfam": "IPv4", 00:15:37.893 "traddr": "10.0.0.2", 00:15:37.893 "trsvcid": "4420" 00:15:37.893 }, 00:15:37.893 "peer_address": { 00:15:37.893 "trtype": "TCP", 00:15:37.893 "adrfam": "IPv4", 00:15:37.893 "traddr": "10.0.0.1", 00:15:37.893 "trsvcid": "50336" 00:15:37.893 }, 00:15:37.893 "auth": { 00:15:37.893 "state": "completed", 00:15:37.893 "digest": "sha384", 00:15:37.893 "dhgroup": "ffdhe3072" 00:15:37.893 } 00:15:37.893 } 00:15:37.893 ]' 00:15:37.893 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.893 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.893 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.894 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:37.894 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.894 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.894 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.894 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.153 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:15:38.153 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:15:39.090 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.090 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:39.090 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.090 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.090 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.090 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:39.090 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.090 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:39.090 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:39.349 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:39.349 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.349 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:39.349 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:39.349 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:39.349 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.349 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.349 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.349 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.349 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
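[Editor's note] After each attach, the script checks that the controller actually came up and that the subsystem's queue pair completed authentication with the digest and DH group selected for the round, as the get_controllers/get_qpairs/jq entries that follow show for sha384/ffdhe4096/key0. Roughly, with the exact plumbing inferred from the xtrace rather than quoted from auth.sh:

    # The attached controller must be reported under the expected name.
    [[ $(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
         bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # The subsystem's qpair must have finished DH-HMAC-CHAP with this round's parameters.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear the host-side controller down before exercising the kernel initiator via nvme-cli.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_detach_controller nvme0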
00:15:39.349 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.349 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.349 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.915 00:15:39.915 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.915 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.915 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.172 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.172 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.172 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.172 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.172 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.172 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.173 { 00:15:40.173 "cntlid": 73, 00:15:40.173 "qid": 0, 00:15:40.173 "state": "enabled", 00:15:40.173 "thread": "nvmf_tgt_poll_group_000", 00:15:40.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:40.173 "listen_address": { 00:15:40.173 "trtype": "TCP", 00:15:40.173 "adrfam": "IPv4", 00:15:40.173 "traddr": "10.0.0.2", 00:15:40.173 "trsvcid": "4420" 00:15:40.173 }, 00:15:40.173 "peer_address": { 00:15:40.173 "trtype": "TCP", 00:15:40.173 "adrfam": "IPv4", 00:15:40.173 "traddr": "10.0.0.1", 00:15:40.173 "trsvcid": "46138" 00:15:40.173 }, 00:15:40.173 "auth": { 00:15:40.173 "state": "completed", 00:15:40.173 "digest": "sha384", 00:15:40.173 "dhgroup": "ffdhe4096" 00:15:40.173 } 00:15:40.173 } 00:15:40.173 ]' 00:15:40.173 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.173 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.173 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.173 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:40.173 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.431 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.431 
04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.431 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.689 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:15:40.689 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:15:41.623 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.623 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:41.623 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.623 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.623 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.623 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.623 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:41.623 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:41.881 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:41.881 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.881 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:41.881 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:41.881 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:41.881 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.881 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.881 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.881 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.881 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.881 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.881 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.881 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.139 00:15:42.139 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.139 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.139 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.397 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.397 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.397 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.397 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.397 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.397 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.397 { 00:15:42.397 "cntlid": 75, 00:15:42.397 "qid": 0, 00:15:42.397 "state": "enabled", 00:15:42.397 "thread": "nvmf_tgt_poll_group_000", 00:15:42.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:42.397 "listen_address": { 00:15:42.397 "trtype": "TCP", 00:15:42.397 "adrfam": "IPv4", 00:15:42.397 "traddr": "10.0.0.2", 00:15:42.397 "trsvcid": "4420" 00:15:42.397 }, 00:15:42.397 "peer_address": { 00:15:42.397 "trtype": "TCP", 00:15:42.397 "adrfam": "IPv4", 00:15:42.397 "traddr": "10.0.0.1", 00:15:42.397 "trsvcid": "46156" 00:15:42.397 }, 00:15:42.397 "auth": { 00:15:42.397 "state": "completed", 00:15:42.397 "digest": "sha384", 00:15:42.397 "dhgroup": "ffdhe4096" 00:15:42.397 } 00:15:42.397 } 00:15:42.398 ]' 00:15:42.398 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.398 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.398 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.656 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:15:42.656 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.656 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.656 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.656 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.913 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:15:42.913 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:15:43.848 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.848 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:43.848 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.848 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.848 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.848 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.848 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:43.848 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:44.106 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:44.106 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.106 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:44.106 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:44.106 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:44.106 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.106 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.106 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.106 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.106 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.106 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.106 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.106 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.364 00:15:44.364 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.364 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.364 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.623 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.623 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.623 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.623 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.623 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.623 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.623 { 00:15:44.623 "cntlid": 77, 00:15:44.623 "qid": 0, 00:15:44.623 "state": "enabled", 00:15:44.623 "thread": "nvmf_tgt_poll_group_000", 00:15:44.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:44.623 "listen_address": { 00:15:44.623 "trtype": "TCP", 00:15:44.623 "adrfam": "IPv4", 00:15:44.623 "traddr": "10.0.0.2", 00:15:44.623 "trsvcid": "4420" 00:15:44.623 }, 00:15:44.623 "peer_address": { 00:15:44.623 "trtype": "TCP", 00:15:44.623 "adrfam": "IPv4", 00:15:44.623 "traddr": "10.0.0.1", 00:15:44.623 "trsvcid": "46186" 00:15:44.623 }, 00:15:44.623 "auth": { 00:15:44.623 "state": "completed", 00:15:44.623 "digest": "sha384", 00:15:44.623 "dhgroup": "ffdhe4096" 00:15:44.623 } 00:15:44.623 } 00:15:44.623 ]' 00:15:44.623 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.881 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.881 04:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.881 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:44.881 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.881 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.881 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.881 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.139 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:15:45.139 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:15:46.075 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.075 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:46.075 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.075 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.075 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.075 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.075 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:46.075 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:46.334 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:46.334 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.334 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:46.334 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:46.334 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:46.334 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.334 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:46.334 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.334 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.334 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.334 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:46.334 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.334 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.592 00:15:46.592 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.592 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.592 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.850 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.850 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.850 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.850 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.850 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.850 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.850 { 00:15:46.850 "cntlid": 79, 00:15:46.850 "qid": 0, 00:15:46.850 "state": "enabled", 00:15:46.850 "thread": "nvmf_tgt_poll_group_000", 00:15:46.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:46.850 "listen_address": { 00:15:46.850 "trtype": "TCP", 00:15:46.850 "adrfam": "IPv4", 00:15:46.850 "traddr": "10.0.0.2", 00:15:46.850 "trsvcid": "4420" 00:15:46.850 }, 00:15:46.850 "peer_address": { 00:15:46.850 "trtype": "TCP", 00:15:46.850 "adrfam": "IPv4", 00:15:46.850 "traddr": "10.0.0.1", 00:15:46.850 "trsvcid": "46206" 00:15:46.850 }, 00:15:46.850 "auth": { 00:15:46.850 "state": "completed", 00:15:46.850 "digest": "sha384", 00:15:46.850 "dhgroup": "ffdhe4096" 00:15:46.850 } 00:15:46.850 } 00:15:46.850 ]' 00:15:46.850 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.108 04:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.108 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.108 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:47.108 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.108 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.108 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.108 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.366 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:15:47.366 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:15:48.302 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.302 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.302 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.302 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.302 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.302 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.302 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.302 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:48.302 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:48.560 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:48.560 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.560 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:48.560 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:48.560 04:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:48.560 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.560 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.560 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.560 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.560 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.560 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.560 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.560 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.126 00:15:49.126 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.126 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.126 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.384 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.384 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.384 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.384 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.384 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.384 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.384 { 00:15:49.384 "cntlid": 81, 00:15:49.384 "qid": 0, 00:15:49.384 "state": "enabled", 00:15:49.384 "thread": "nvmf_tgt_poll_group_000", 00:15:49.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:49.384 "listen_address": { 00:15:49.384 "trtype": "TCP", 00:15:49.384 "adrfam": "IPv4", 00:15:49.384 "traddr": "10.0.0.2", 00:15:49.384 "trsvcid": "4420" 00:15:49.384 }, 00:15:49.384 "peer_address": { 00:15:49.384 "trtype": "TCP", 00:15:49.384 "adrfam": "IPv4", 00:15:49.384 "traddr": "10.0.0.1", 00:15:49.384 "trsvcid": "59332" 00:15:49.384 }, 00:15:49.384 "auth": { 00:15:49.384 "state": "completed", 00:15:49.384 "digest": 
"sha384", 00:15:49.384 "dhgroup": "ffdhe6144" 00:15:49.384 } 00:15:49.384 } 00:15:49.384 ]' 00:15:49.384 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.384 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.384 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.384 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:49.384 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.384 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.384 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.384 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.642 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:15:49.642 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:15:50.576 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.576 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.576 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.576 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.576 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.576 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.576 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:50.576 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:50.835 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:50.835 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.835 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:50.835 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:50.835 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:50.835 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.835 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.835 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.835 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.835 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.835 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.835 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.835 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.399 00:15:51.399 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.399 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.399 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.657 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.657 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.657 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.657 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.657 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.657 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.657 { 00:15:51.657 "cntlid": 83, 00:15:51.657 "qid": 0, 00:15:51.657 "state": "enabled", 00:15:51.657 "thread": "nvmf_tgt_poll_group_000", 00:15:51.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:51.657 "listen_address": { 00:15:51.657 "trtype": "TCP", 00:15:51.657 "adrfam": "IPv4", 00:15:51.657 "traddr": "10.0.0.2", 00:15:51.657 
"trsvcid": "4420" 00:15:51.657 }, 00:15:51.657 "peer_address": { 00:15:51.657 "trtype": "TCP", 00:15:51.657 "adrfam": "IPv4", 00:15:51.657 "traddr": "10.0.0.1", 00:15:51.657 "trsvcid": "59354" 00:15:51.657 }, 00:15:51.657 "auth": { 00:15:51.657 "state": "completed", 00:15:51.657 "digest": "sha384", 00:15:51.657 "dhgroup": "ffdhe6144" 00:15:51.657 } 00:15:51.657 } 00:15:51.657 ]' 00:15:51.657 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.913 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.913 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.913 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:51.913 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.913 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.913 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.913 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.171 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:15:52.171 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:15:53.104 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.104 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:53.104 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.104 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.104 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.104 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.104 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:53.104 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:53.396 
04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:53.396 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.396 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.396 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:53.396 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:53.396 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.396 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.396 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.396 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.396 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.396 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.396 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.396 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.989 00:15:53.989 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.989 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.989 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.247 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.247 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.247 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.247 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.247 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.247 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.247 { 00:15:54.247 "cntlid": 85, 00:15:54.247 "qid": 0, 00:15:54.247 "state": "enabled", 00:15:54.247 "thread": "nvmf_tgt_poll_group_000", 00:15:54.247 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:54.247 "listen_address": { 00:15:54.247 "trtype": "TCP", 00:15:54.247 "adrfam": "IPv4", 00:15:54.247 "traddr": "10.0.0.2", 00:15:54.247 "trsvcid": "4420" 00:15:54.247 }, 00:15:54.247 "peer_address": { 00:15:54.247 "trtype": "TCP", 00:15:54.247 "adrfam": "IPv4", 00:15:54.247 "traddr": "10.0.0.1", 00:15:54.247 "trsvcid": "59380" 00:15:54.247 }, 00:15:54.247 "auth": { 00:15:54.247 "state": "completed", 00:15:54.247 "digest": "sha384", 00:15:54.247 "dhgroup": "ffdhe6144" 00:15:54.247 } 00:15:54.247 } 00:15:54.247 ]' 00:15:54.247 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.247 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.247 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.247 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:54.247 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.247 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.247 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.247 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.505 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:15:54.505 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:15:55.440 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.698 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.698 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.698 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.698 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.698 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.698 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:55.698 04:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:55.956 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:55.956 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.956 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:55.956 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:55.956 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:55.956 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.956 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:55.956 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.956 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.956 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.956 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:55.956 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.956 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.522 00:15:56.522 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.522 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.522 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.780 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.780 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.780 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.780 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.780 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.780 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.780 { 00:15:56.780 "cntlid": 87, 
00:15:56.780 "qid": 0, 00:15:56.780 "state": "enabled", 00:15:56.780 "thread": "nvmf_tgt_poll_group_000", 00:15:56.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:56.780 "listen_address": { 00:15:56.780 "trtype": "TCP", 00:15:56.780 "adrfam": "IPv4", 00:15:56.780 "traddr": "10.0.0.2", 00:15:56.780 "trsvcid": "4420" 00:15:56.780 }, 00:15:56.780 "peer_address": { 00:15:56.780 "trtype": "TCP", 00:15:56.780 "adrfam": "IPv4", 00:15:56.780 "traddr": "10.0.0.1", 00:15:56.780 "trsvcid": "59408" 00:15:56.780 }, 00:15:56.780 "auth": { 00:15:56.780 "state": "completed", 00:15:56.780 "digest": "sha384", 00:15:56.780 "dhgroup": "ffdhe6144" 00:15:56.780 } 00:15:56.780 } 00:15:56.780 ]' 00:15:56.780 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.780 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.780 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.780 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:56.780 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.780 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.780 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.780 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.346 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:15:57.346 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:15:57.911 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.911 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.911 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.911 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.168 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.168 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.168 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.168 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:58.168 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:58.427 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:58.427 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.427 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.427 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:58.427 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:58.427 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.427 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.427 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.427 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.427 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.427 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.427 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.427 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.360 00:15:59.360 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.360 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.360 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.360 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.360 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.360 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.360 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.360 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.360 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.360 { 00:15:59.360 "cntlid": 89, 00:15:59.360 "qid": 0, 00:15:59.360 "state": "enabled", 00:15:59.360 "thread": "nvmf_tgt_poll_group_000", 00:15:59.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:59.360 "listen_address": { 00:15:59.360 "trtype": "TCP", 00:15:59.360 "adrfam": "IPv4", 00:15:59.360 "traddr": "10.0.0.2", 00:15:59.360 "trsvcid": "4420" 00:15:59.360 }, 00:15:59.360 "peer_address": { 00:15:59.360 "trtype": "TCP", 00:15:59.360 "adrfam": "IPv4", 00:15:59.360 "traddr": "10.0.0.1", 00:15:59.360 "trsvcid": "53786" 00:15:59.360 }, 00:15:59.360 "auth": { 00:15:59.360 "state": "completed", 00:15:59.360 "digest": "sha384", 00:15:59.360 "dhgroup": "ffdhe8192" 00:15:59.360 } 00:15:59.360 } 00:15:59.360 ]' 00:15:59.360 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.617 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.617 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.617 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:59.617 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.617 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.617 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.617 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.874 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:15:59.874 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:16:00.805 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.805 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.805 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.805 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.805 04:03:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.805 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.805 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:00.805 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:01.063 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:01.063 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.063 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:01.063 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:01.063 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:01.063 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.063 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.063 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.063 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.063 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.063 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.063 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.063 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.995 00:16:01.995 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.995 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.995 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.253 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.253 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:02.253 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.253 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.253 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.253 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.253 { 00:16:02.253 "cntlid": 91, 00:16:02.253 "qid": 0, 00:16:02.253 "state": "enabled", 00:16:02.253 "thread": "nvmf_tgt_poll_group_000", 00:16:02.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:02.253 "listen_address": { 00:16:02.253 "trtype": "TCP", 00:16:02.253 "adrfam": "IPv4", 00:16:02.253 "traddr": "10.0.0.2", 00:16:02.253 "trsvcid": "4420" 00:16:02.253 }, 00:16:02.253 "peer_address": { 00:16:02.253 "trtype": "TCP", 00:16:02.253 "adrfam": "IPv4", 00:16:02.253 "traddr": "10.0.0.1", 00:16:02.253 "trsvcid": "53810" 00:16:02.253 }, 00:16:02.253 "auth": { 00:16:02.253 "state": "completed", 00:16:02.253 "digest": "sha384", 00:16:02.253 "dhgroup": "ffdhe8192" 00:16:02.253 } 00:16:02.253 } 00:16:02.253 ]' 00:16:02.253 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.253 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.253 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.253 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:02.253 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.253 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.253 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.253 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.511 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:16:02.511 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:16:03.443 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.443 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.443 04:03:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.443 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.443 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.443 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.443 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:03.443 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:03.701 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:03.701 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.701 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.701 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:03.701 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:03.701 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.701 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.701 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.701 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.701 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.701 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.701 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.701 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.633 00:16:04.633 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.633 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.633 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.891 04:03:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.891 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.891 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.891 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.891 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.891 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.891 { 00:16:04.891 "cntlid": 93, 00:16:04.891 "qid": 0, 00:16:04.891 "state": "enabled", 00:16:04.891 "thread": "nvmf_tgt_poll_group_000", 00:16:04.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:04.891 "listen_address": { 00:16:04.891 "trtype": "TCP", 00:16:04.891 "adrfam": "IPv4", 00:16:04.891 "traddr": "10.0.0.2", 00:16:04.891 "trsvcid": "4420" 00:16:04.891 }, 00:16:04.891 "peer_address": { 00:16:04.891 "trtype": "TCP", 00:16:04.891 "adrfam": "IPv4", 00:16:04.891 "traddr": "10.0.0.1", 00:16:04.891 "trsvcid": "53832" 00:16:04.891 }, 00:16:04.891 "auth": { 00:16:04.891 "state": "completed", 00:16:04.891 "digest": "sha384", 00:16:04.891 "dhgroup": "ffdhe8192" 00:16:04.891 } 00:16:04.891 } 00:16:04.891 ]' 00:16:04.891 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.891 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.891 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.891 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:04.891 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.149 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.149 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.149 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.406 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:16:05.406 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:16:06.339 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.339 04:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.339 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.339 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.339 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.339 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.339 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:06.339 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:06.597 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:06.597 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.597 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.597 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:06.597 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:06.597 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.597 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:06.597 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.597 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.597 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.597 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:06.597 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.597 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.529 00:16:07.529 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.529 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.529 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.787 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.787 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.787 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.787 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.787 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.787 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.787 { 00:16:07.787 "cntlid": 95, 00:16:07.787 "qid": 0, 00:16:07.787 "state": "enabled", 00:16:07.787 "thread": "nvmf_tgt_poll_group_000", 00:16:07.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:07.787 "listen_address": { 00:16:07.787 "trtype": "TCP", 00:16:07.787 "adrfam": "IPv4", 00:16:07.787 "traddr": "10.0.0.2", 00:16:07.787 "trsvcid": "4420" 00:16:07.787 }, 00:16:07.787 "peer_address": { 00:16:07.787 "trtype": "TCP", 00:16:07.787 "adrfam": "IPv4", 00:16:07.787 "traddr": "10.0.0.1", 00:16:07.787 "trsvcid": "53858" 00:16:07.787 }, 00:16:07.787 "auth": { 00:16:07.787 "state": "completed", 00:16:07.787 "digest": "sha384", 00:16:07.787 "dhgroup": "ffdhe8192" 00:16:07.787 } 00:16:07.787 } 00:16:07.787 ]' 00:16:07.787 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.787 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.787 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.787 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:07.787 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.787 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.787 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.787 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.044 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:16:08.045 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:16:08.977 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.977 04:04:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.977 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.977 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.977 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.977 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:08.977 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.977 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.977 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:08.977 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:09.235 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:09.235 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.235 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:09.235 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:09.235 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:09.235 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.235 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.235 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.235 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.235 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.235 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.235 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.236 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.493 00:16:09.493 
04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.493 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.493 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.751 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.751 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.751 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.751 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.751 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.751 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.751 { 00:16:09.751 "cntlid": 97, 00:16:09.751 "qid": 0, 00:16:09.751 "state": "enabled", 00:16:09.751 "thread": "nvmf_tgt_poll_group_000", 00:16:09.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:09.751 "listen_address": { 00:16:09.751 "trtype": "TCP", 00:16:09.751 "adrfam": "IPv4", 00:16:09.751 "traddr": "10.0.0.2", 00:16:09.751 "trsvcid": "4420" 00:16:09.751 }, 00:16:09.751 "peer_address": { 00:16:09.751 "trtype": "TCP", 00:16:09.751 "adrfam": "IPv4", 00:16:09.751 "traddr": "10.0.0.1", 00:16:09.751 "trsvcid": "50446" 00:16:09.751 }, 00:16:09.751 "auth": { 00:16:09.751 "state": "completed", 00:16:09.751 "digest": "sha512", 00:16:09.751 "dhgroup": "null" 00:16:09.751 } 00:16:09.751 } 00:16:09.751 ]' 00:16:09.751 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.008 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.008 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.008 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:10.008 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.009 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.009 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.009 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.266 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:16:10.266 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:16:11.199 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.199 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.199 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.199 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.199 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.199 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.199 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:11.200 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:11.457 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:11.457 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.457 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:11.457 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:11.457 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:11.457 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.457 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.457 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.457 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.457 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.457 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.457 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.457 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.714 00:16:11.714 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.714 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.714 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.972 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.972 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.972 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.972 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.972 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.972 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.972 { 00:16:11.972 "cntlid": 99, 00:16:11.972 "qid": 0, 00:16:11.972 "state": "enabled", 00:16:11.972 "thread": "nvmf_tgt_poll_group_000", 00:16:11.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:11.972 "listen_address": { 00:16:11.972 "trtype": "TCP", 00:16:11.972 "adrfam": "IPv4", 00:16:11.972 "traddr": "10.0.0.2", 00:16:11.972 "trsvcid": "4420" 00:16:11.972 }, 00:16:11.972 "peer_address": { 00:16:11.972 "trtype": "TCP", 00:16:11.972 "adrfam": "IPv4", 00:16:11.972 "traddr": "10.0.0.1", 00:16:11.972 "trsvcid": "50480" 00:16:11.972 }, 00:16:11.972 "auth": { 00:16:11.972 "state": "completed", 00:16:11.972 "digest": "sha512", 00:16:11.972 "dhgroup": "null" 00:16:11.972 } 00:16:11.972 } 00:16:11.972 ]' 00:16:11.972 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.972 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.972 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.972 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:11.972 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.229 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.229 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.229 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.487 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:16:12.487 04:04:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
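For readers following the xtrace above: each pass of this loop (one digest, one DH group, one key index) boils down to three RPCs before the checks run. A minimal sketch of that sequence in plain shell, using the same values as this run; it assumes key0/ckey0 were registered on the host keyring earlier in the script, outside this excerpt:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    # host side: allow only the digest/dhgroup combination under test
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
    # target side: register the host NQN with its DH-HMAC-CHAP key (and controller key, when one is configured)
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach a controller; authentication must complete for the qpair to reach "enabled"
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $hostnqn -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

Every RPC and flag above appears verbatim in the log entries surrounding this point; only the shell variables are added for readability.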
00:16:13.419 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.985 00:16:13.985 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.985 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.985 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.242 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.242 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.242 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.242 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.242 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.242 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.242 { 00:16:14.242 "cntlid": 101, 00:16:14.242 "qid": 0, 00:16:14.242 "state": "enabled", 00:16:14.242 "thread": "nvmf_tgt_poll_group_000", 00:16:14.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:14.242 "listen_address": { 00:16:14.242 "trtype": "TCP", 00:16:14.242 "adrfam": "IPv4", 00:16:14.242 "traddr": "10.0.0.2", 00:16:14.242 "trsvcid": "4420" 00:16:14.242 }, 00:16:14.242 "peer_address": { 00:16:14.242 "trtype": "TCP", 00:16:14.242 "adrfam": "IPv4", 00:16:14.242 "traddr": "10.0.0.1", 00:16:14.242 "trsvcid": "50520" 00:16:14.242 }, 00:16:14.242 "auth": { 00:16:14.242 "state": "completed", 00:16:14.242 "digest": "sha512", 00:16:14.242 "dhgroup": "null" 00:16:14.242 } 00:16:14.242 } 00:16:14.242 ]' 00:16:14.242 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.242 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.243 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.243 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:14.243 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.243 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.243 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.243 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.500 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:16:14.500 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:16:15.433 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.433 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.433 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.433 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.433 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.433 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.433 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:15.433 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:15.691 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:15.691 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.692 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:15.692 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:15.692 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:15.692 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.692 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:15.692 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.692 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.692 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.692 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:15.692 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.692 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.257 00:16:16.257 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.257 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.257 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.515 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.515 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.515 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.515 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.515 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.515 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.515 { 00:16:16.515 "cntlid": 103, 00:16:16.515 "qid": 0, 00:16:16.515 "state": "enabled", 00:16:16.515 "thread": "nvmf_tgt_poll_group_000", 00:16:16.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:16.515 "listen_address": { 00:16:16.515 "trtype": "TCP", 00:16:16.515 "adrfam": "IPv4", 00:16:16.515 "traddr": "10.0.0.2", 00:16:16.515 "trsvcid": "4420" 00:16:16.515 }, 00:16:16.515 "peer_address": { 00:16:16.515 "trtype": "TCP", 00:16:16.515 "adrfam": "IPv4", 00:16:16.515 "traddr": "10.0.0.1", 00:16:16.515 "trsvcid": "50558" 00:16:16.515 }, 00:16:16.515 "auth": { 00:16:16.515 "state": "completed", 00:16:16.515 "digest": "sha512", 00:16:16.515 "dhgroup": "null" 00:16:16.515 } 00:16:16.515 } 00:16:16.515 ]' 00:16:16.515 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.515 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.515 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.515 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:16.515 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.515 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.515 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.515 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.772 04:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:16:16.772 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:16:17.705 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.705 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.705 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.705 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.705 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.705 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.705 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.705 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:17.705 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:17.963 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:17.963 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.963 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:17.963 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:17.963 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:17.963 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.963 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.963 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.963 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.963 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.963 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
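The nvme connect invocations in the entries above exercise the same handshake from the Linux kernel initiator, passing the DHHC-1 secrets directly instead of keyring names, and each iteration then tears everything down before the next digest/dhgroup/key combination. A rough sketch of that tail end of the loop; the secret values below are placeholders, not the DHHC-1 strings generated in this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    key='DHHC-1:00:<placeholder>'        # host secret for this key index (placeholder)
    ctrl_key='DHHC-1:03:<placeholder>'   # controller secret; omitted when no ctrlr key is configured (as with key3 above)
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q $hostnqn --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ctrl_key"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # drop the host entry so the next iteration can re-add it with a different key pair
    $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 $hostnqn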
00:16:17.963 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.963 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.221 00:16:18.221 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.221 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.221 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.479 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.479 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.479 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.479 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.479 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.479 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.479 { 00:16:18.479 "cntlid": 105, 00:16:18.479 "qid": 0, 00:16:18.479 "state": "enabled", 00:16:18.479 "thread": "nvmf_tgt_poll_group_000", 00:16:18.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:18.479 "listen_address": { 00:16:18.479 "trtype": "TCP", 00:16:18.479 "adrfam": "IPv4", 00:16:18.479 "traddr": "10.0.0.2", 00:16:18.479 "trsvcid": "4420" 00:16:18.479 }, 00:16:18.479 "peer_address": { 00:16:18.479 "trtype": "TCP", 00:16:18.479 "adrfam": "IPv4", 00:16:18.479 "traddr": "10.0.0.1", 00:16:18.479 "trsvcid": "50340" 00:16:18.479 }, 00:16:18.479 "auth": { 00:16:18.479 "state": "completed", 00:16:18.479 "digest": "sha512", 00:16:18.479 "dhgroup": "ffdhe2048" 00:16:18.479 } 00:16:18.479 } 00:16:18.479 ]' 00:16:18.479 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.736 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.736 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.736 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:18.736 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.736 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.736 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.736 04:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.994 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:16:18.994 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:16:19.928 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.928 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.928 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.928 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.928 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.928 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.928 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:19.928 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:20.186 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:20.186 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.186 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:20.186 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:20.186 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:20.186 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.186 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.186 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.186 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.186 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.186 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.186 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.186 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.444 00:16:20.444 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.444 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.444 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.701 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.701 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.701 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.701 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.701 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.701 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.701 { 00:16:20.701 "cntlid": 107, 00:16:20.701 "qid": 0, 00:16:20.701 "state": "enabled", 00:16:20.701 "thread": "nvmf_tgt_poll_group_000", 00:16:20.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:20.701 "listen_address": { 00:16:20.701 "trtype": "TCP", 00:16:20.701 "adrfam": "IPv4", 00:16:20.701 "traddr": "10.0.0.2", 00:16:20.701 "trsvcid": "4420" 00:16:20.701 }, 00:16:20.701 "peer_address": { 00:16:20.701 "trtype": "TCP", 00:16:20.701 "adrfam": "IPv4", 00:16:20.701 "traddr": "10.0.0.1", 00:16:20.701 "trsvcid": "50352" 00:16:20.701 }, 00:16:20.701 "auth": { 00:16:20.701 "state": "completed", 00:16:20.701 "digest": "sha512", 00:16:20.701 "dhgroup": "ffdhe2048" 00:16:20.701 } 00:16:20.701 } 00:16:20.701 ]' 00:16:20.701 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.701 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.701 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.959 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:20.959 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:16:20.959 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.959 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.959 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.217 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:16:21.217 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:16:22.150 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.150 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.150 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.150 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.150 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.150 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.150 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:22.150 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:22.408 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:22.408 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.408 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:22.408 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:22.408 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:22.408 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.408 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
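Between attach and teardown, every pass runs the same assertions that produce the qpair dumps seen throughout this log: the controller must be visible on the host, and the target's view of the qpair must report the digest, DH group and auth state that were just configured. A condensed sketch of those checks (sha512/ffdhe2048 shown, matching this stretch of the log; the [[ ]] comparisons are a simplification of the script's own checks):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # the attached controller should be listed on the host side
    [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # the qpair's negotiated auth parameters must match the configured combination
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    # detach before moving on to the next key/dhgroup combination
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0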
00:16:22.408 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.408 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.408 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.408 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.408 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.408 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.666 00:16:22.666 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.666 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.666 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.253 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.253 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.253 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.253 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.253 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.253 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.253 { 00:16:23.253 "cntlid": 109, 00:16:23.253 "qid": 0, 00:16:23.253 "state": "enabled", 00:16:23.253 "thread": "nvmf_tgt_poll_group_000", 00:16:23.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:23.253 "listen_address": { 00:16:23.253 "trtype": "TCP", 00:16:23.253 "adrfam": "IPv4", 00:16:23.253 "traddr": "10.0.0.2", 00:16:23.253 "trsvcid": "4420" 00:16:23.253 }, 00:16:23.253 "peer_address": { 00:16:23.253 "trtype": "TCP", 00:16:23.253 "adrfam": "IPv4", 00:16:23.253 "traddr": "10.0.0.1", 00:16:23.253 "trsvcid": "50366" 00:16:23.253 }, 00:16:23.253 "auth": { 00:16:23.253 "state": "completed", 00:16:23.253 "digest": "sha512", 00:16:23.253 "dhgroup": "ffdhe2048" 00:16:23.253 } 00:16:23.253 } 00:16:23.253 ]' 00:16:23.253 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.253 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.253 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.253 04:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:23.253 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.253 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.253 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.253 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.543 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:16:23.544 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:16:24.481 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.481 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.481 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.481 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.481 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.481 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.481 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:24.481 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:24.739 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:24.739 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.739 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:24.739 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:24.739 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:24.739 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.739 04:04:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:24.739 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.739 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.739 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.739 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:24.739 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.739 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.997 00:16:24.997 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.997 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.997 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.255 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.255 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.255 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.255 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.255 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.255 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.255 { 00:16:25.255 "cntlid": 111, 00:16:25.255 "qid": 0, 00:16:25.255 "state": "enabled", 00:16:25.255 "thread": "nvmf_tgt_poll_group_000", 00:16:25.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:25.255 "listen_address": { 00:16:25.255 "trtype": "TCP", 00:16:25.255 "adrfam": "IPv4", 00:16:25.255 "traddr": "10.0.0.2", 00:16:25.255 "trsvcid": "4420" 00:16:25.255 }, 00:16:25.255 "peer_address": { 00:16:25.255 "trtype": "TCP", 00:16:25.255 "adrfam": "IPv4", 00:16:25.255 "traddr": "10.0.0.1", 00:16:25.255 "trsvcid": "50378" 00:16:25.255 }, 00:16:25.255 "auth": { 00:16:25.255 "state": "completed", 00:16:25.255 "digest": "sha512", 00:16:25.255 "dhgroup": "ffdhe2048" 00:16:25.255 } 00:16:25.255 } 00:16:25.255 ]' 00:16:25.255 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.255 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.255 
04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.255 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:25.255 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.512 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.512 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.512 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.771 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:16:25.771 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:16:26.703 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.703 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.703 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.703 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.703 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.703 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.703 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.703 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:26.703 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:26.960 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:26.960 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.960 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:26.960 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:26.960 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.961 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.961 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.961 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.961 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.961 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.961 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.961 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.961 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.218 00:16:27.218 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.218 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.218 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.476 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.476 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.476 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.476 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.476 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.476 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.476 { 00:16:27.476 "cntlid": 113, 00:16:27.476 "qid": 0, 00:16:27.476 "state": "enabled", 00:16:27.476 "thread": "nvmf_tgt_poll_group_000", 00:16:27.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:27.476 "listen_address": { 00:16:27.476 "trtype": "TCP", 00:16:27.476 "adrfam": "IPv4", 00:16:27.476 "traddr": "10.0.0.2", 00:16:27.476 "trsvcid": "4420" 00:16:27.476 }, 00:16:27.476 "peer_address": { 00:16:27.476 "trtype": "TCP", 00:16:27.476 "adrfam": "IPv4", 00:16:27.476 "traddr": "10.0.0.1", 00:16:27.476 "trsvcid": "50398" 00:16:27.476 }, 00:16:27.476 "auth": { 00:16:27.476 "state": "completed", 00:16:27.476 "digest": "sha512", 00:16:27.476 "dhgroup": "ffdhe3072" 00:16:27.476 } 00:16:27.476 } 00:16:27.476 ]' 00:16:27.476 04:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.476 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.476 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.734 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:27.734 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.734 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.734 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.734 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.991 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:16:27.991 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:16:28.923 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.923 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.923 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.923 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.923 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.923 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.923 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.923 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:29.181 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:29.181 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.181 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:29.181 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:29.181 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:29.181 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.181 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.181 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.181 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.181 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.181 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.181 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.181 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.438 00:16:29.438 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.438 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.439 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.696 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.696 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.696 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.696 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.696 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.696 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.696 { 00:16:29.696 "cntlid": 115, 00:16:29.696 "qid": 0, 00:16:29.696 "state": "enabled", 00:16:29.696 "thread": "nvmf_tgt_poll_group_000", 00:16:29.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:29.696 "listen_address": { 00:16:29.696 "trtype": "TCP", 00:16:29.696 "adrfam": "IPv4", 00:16:29.696 "traddr": "10.0.0.2", 00:16:29.696 "trsvcid": "4420" 00:16:29.696 }, 00:16:29.696 "peer_address": { 00:16:29.696 "trtype": "TCP", 00:16:29.696 "adrfam": "IPv4", 
00:16:29.696 "traddr": "10.0.0.1", 00:16:29.696 "trsvcid": "34288" 00:16:29.696 }, 00:16:29.696 "auth": { 00:16:29.696 "state": "completed", 00:16:29.696 "digest": "sha512", 00:16:29.696 "dhgroup": "ffdhe3072" 00:16:29.696 } 00:16:29.696 } 00:16:29.696 ]' 00:16:29.696 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.696 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.696 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.953 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:29.953 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.953 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.953 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.953 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.211 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:16:30.211 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:16:31.144 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.144 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.144 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.144 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.144 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.144 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.144 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:31.144 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:31.401 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:16:31.401 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.401 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.401 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:31.401 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:31.401 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.401 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.401 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.401 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.401 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.401 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.401 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.401 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.659 00:16:31.659 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.659 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.659 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.918 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.918 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.918 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.918 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.918 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.918 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.918 { 00:16:31.918 "cntlid": 117, 00:16:31.918 "qid": 0, 00:16:31.918 "state": "enabled", 00:16:31.918 "thread": "nvmf_tgt_poll_group_000", 00:16:31.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:31.918 "listen_address": { 00:16:31.918 "trtype": "TCP", 
00:16:31.918 "adrfam": "IPv4", 00:16:31.918 "traddr": "10.0.0.2", 00:16:31.918 "trsvcid": "4420" 00:16:31.918 }, 00:16:31.918 "peer_address": { 00:16:31.918 "trtype": "TCP", 00:16:31.918 "adrfam": "IPv4", 00:16:31.918 "traddr": "10.0.0.1", 00:16:31.918 "trsvcid": "34314" 00:16:31.918 }, 00:16:31.918 "auth": { 00:16:31.918 "state": "completed", 00:16:31.918 "digest": "sha512", 00:16:31.918 "dhgroup": "ffdhe3072" 00:16:31.918 } 00:16:31.918 } 00:16:31.918 ]' 00:16:31.918 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.176 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.176 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.176 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:32.176 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.176 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.176 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.176 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.434 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:16:32.434 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:16:33.366 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.367 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.367 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.367 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.367 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.367 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.367 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:33.367 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:33.624 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:33.624 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.624 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.624 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:33.624 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:33.624 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.624 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:33.624 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.624 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.624 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.624 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:33.624 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.624 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.882 00:16:34.139 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.139 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.139 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.397 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.397 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.397 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.397 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.397 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.397 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.397 { 00:16:34.397 "cntlid": 119, 00:16:34.397 "qid": 0, 00:16:34.397 "state": "enabled", 00:16:34.397 "thread": "nvmf_tgt_poll_group_000", 00:16:34.397 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:34.397 "listen_address": { 00:16:34.397 "trtype": "TCP", 00:16:34.397 "adrfam": "IPv4", 00:16:34.397 "traddr": "10.0.0.2", 00:16:34.397 "trsvcid": "4420" 00:16:34.397 }, 00:16:34.397 "peer_address": { 00:16:34.397 "trtype": "TCP", 00:16:34.397 "adrfam": "IPv4", 00:16:34.397 "traddr": "10.0.0.1", 00:16:34.397 "trsvcid": "34346" 00:16:34.397 }, 00:16:34.397 "auth": { 00:16:34.397 "state": "completed", 00:16:34.397 "digest": "sha512", 00:16:34.397 "dhgroup": "ffdhe3072" 00:16:34.397 } 00:16:34.397 } 00:16:34.397 ]' 00:16:34.397 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.397 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.397 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.397 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:34.397 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.397 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.397 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.397 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.655 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:16:34.655 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:16:35.587 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.587 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.587 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.587 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.587 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.587 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.587 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.587 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:35.587 04:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:35.845 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:35.845 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.846 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.846 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:35.846 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:35.846 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.846 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.846 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.846 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.846 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.846 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.846 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.846 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.411 00:16:36.411 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.411 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.411 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.411 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.411 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.411 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.411 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.669 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.669 04:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.669 { 00:16:36.669 "cntlid": 121, 00:16:36.669 "qid": 0, 00:16:36.669 "state": "enabled", 00:16:36.669 "thread": "nvmf_tgt_poll_group_000", 00:16:36.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:36.669 "listen_address": { 00:16:36.669 "trtype": "TCP", 00:16:36.669 "adrfam": "IPv4", 00:16:36.669 "traddr": "10.0.0.2", 00:16:36.669 "trsvcid": "4420" 00:16:36.669 }, 00:16:36.669 "peer_address": { 00:16:36.669 "trtype": "TCP", 00:16:36.669 "adrfam": "IPv4", 00:16:36.669 "traddr": "10.0.0.1", 00:16:36.669 "trsvcid": "34368" 00:16:36.669 }, 00:16:36.669 "auth": { 00:16:36.669 "state": "completed", 00:16:36.669 "digest": "sha512", 00:16:36.669 "dhgroup": "ffdhe4096" 00:16:36.669 } 00:16:36.669 } 00:16:36.669 ]' 00:16:36.669 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.669 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.669 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.669 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:36.669 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.669 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.669 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.669 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.927 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:16:36.927 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:16:37.859 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.859 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.859 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.859 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.859 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:37.859 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.859 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:37.859 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:38.117 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:38.117 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.117 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.117 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:38.117 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:38.117 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.117 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.117 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.117 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.117 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.117 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.117 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.117 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.375 00:16:38.375 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.375 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.375 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.940 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.940 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.940 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.940 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.940 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.940 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.940 { 00:16:38.940 "cntlid": 123, 00:16:38.940 "qid": 0, 00:16:38.940 "state": "enabled", 00:16:38.940 "thread": "nvmf_tgt_poll_group_000", 00:16:38.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:38.940 "listen_address": { 00:16:38.940 "trtype": "TCP", 00:16:38.940 "adrfam": "IPv4", 00:16:38.940 "traddr": "10.0.0.2", 00:16:38.940 "trsvcid": "4420" 00:16:38.940 }, 00:16:38.940 "peer_address": { 00:16:38.940 "trtype": "TCP", 00:16:38.940 "adrfam": "IPv4", 00:16:38.940 "traddr": "10.0.0.1", 00:16:38.940 "trsvcid": "43810" 00:16:38.940 }, 00:16:38.940 "auth": { 00:16:38.940 "state": "completed", 00:16:38.940 "digest": "sha512", 00:16:38.940 "dhgroup": "ffdhe4096" 00:16:38.940 } 00:16:38.940 } 00:16:38.940 ]' 00:16:38.940 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.940 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.940 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.940 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.940 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.940 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.940 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.940 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.197 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:16:39.198 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:16:40.130 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.130 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:40.130 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.130 04:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.130 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.130 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.130 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:40.130 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:40.388 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:40.388 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.388 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:40.388 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:40.388 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:40.388 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.388 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.388 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.388 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.388 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.388 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.388 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.388 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.645 00:16:40.645 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.645 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.645 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.208 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.208 04:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.208 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.208 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.208 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.208 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.208 { 00:16:41.208 "cntlid": 125, 00:16:41.208 "qid": 0, 00:16:41.208 "state": "enabled", 00:16:41.208 "thread": "nvmf_tgt_poll_group_000", 00:16:41.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:41.208 "listen_address": { 00:16:41.208 "trtype": "TCP", 00:16:41.208 "adrfam": "IPv4", 00:16:41.208 "traddr": "10.0.0.2", 00:16:41.208 "trsvcid": "4420" 00:16:41.208 }, 00:16:41.208 "peer_address": { 00:16:41.208 "trtype": "TCP", 00:16:41.208 "adrfam": "IPv4", 00:16:41.208 "traddr": "10.0.0.1", 00:16:41.208 "trsvcid": "43846" 00:16:41.208 }, 00:16:41.208 "auth": { 00:16:41.208 "state": "completed", 00:16:41.208 "digest": "sha512", 00:16:41.208 "dhgroup": "ffdhe4096" 00:16:41.208 } 00:16:41.208 } 00:16:41.208 ]' 00:16:41.208 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.208 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.208 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.208 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:41.208 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.208 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.208 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.208 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.466 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:16:41.466 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:16:42.398 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.398 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.398 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.398 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.398 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.398 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.398 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:42.398 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:42.656 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:42.656 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.656 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:42.656 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:42.656 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:42.656 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.656 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:42.656 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.656 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.656 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.656 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:42.656 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.656 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.914 00:16:42.914 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.914 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.914 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.172 04:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.172 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.172 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.172 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.430 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.430 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.430 { 00:16:43.430 "cntlid": 127, 00:16:43.430 "qid": 0, 00:16:43.430 "state": "enabled", 00:16:43.430 "thread": "nvmf_tgt_poll_group_000", 00:16:43.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:43.430 "listen_address": { 00:16:43.430 "trtype": "TCP", 00:16:43.430 "adrfam": "IPv4", 00:16:43.430 "traddr": "10.0.0.2", 00:16:43.430 "trsvcid": "4420" 00:16:43.430 }, 00:16:43.430 "peer_address": { 00:16:43.430 "trtype": "TCP", 00:16:43.430 "adrfam": "IPv4", 00:16:43.430 "traddr": "10.0.0.1", 00:16:43.430 "trsvcid": "43870" 00:16:43.430 }, 00:16:43.430 "auth": { 00:16:43.430 "state": "completed", 00:16:43.430 "digest": "sha512", 00:16:43.430 "dhgroup": "ffdhe4096" 00:16:43.430 } 00:16:43.430 } 00:16:43.430 ]' 00:16:43.430 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.430 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.430 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.430 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.430 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.430 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.430 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.430 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.688 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:16:43.688 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:16:44.619 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.619 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.619 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.619 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.619 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.619 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.619 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.619 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:44.619 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:44.876 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:44.876 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.876 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.876 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:44.876 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:44.876 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.876 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.876 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.876 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.876 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.876 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.876 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.876 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.440 00:16:45.440 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.440 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.440 
04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.698 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.698 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.698 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.698 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.698 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.698 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.698 { 00:16:45.698 "cntlid": 129, 00:16:45.698 "qid": 0, 00:16:45.698 "state": "enabled", 00:16:45.698 "thread": "nvmf_tgt_poll_group_000", 00:16:45.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:45.698 "listen_address": { 00:16:45.698 "trtype": "TCP", 00:16:45.698 "adrfam": "IPv4", 00:16:45.698 "traddr": "10.0.0.2", 00:16:45.698 "trsvcid": "4420" 00:16:45.698 }, 00:16:45.698 "peer_address": { 00:16:45.698 "trtype": "TCP", 00:16:45.698 "adrfam": "IPv4", 00:16:45.698 "traddr": "10.0.0.1", 00:16:45.698 "trsvcid": "43892" 00:16:45.698 }, 00:16:45.698 "auth": { 00:16:45.698 "state": "completed", 00:16:45.698 "digest": "sha512", 00:16:45.698 "dhgroup": "ffdhe6144" 00:16:45.698 } 00:16:45.698 } 00:16:45.698 ]' 00:16:45.698 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.698 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.698 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.956 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.956 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.956 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.956 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.956 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.214 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:16:46.214 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:16:47.146 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.146 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.146 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.146 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.146 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.146 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.146 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:47.146 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:47.404 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:47.404 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.404 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.404 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:47.404 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:47.404 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.404 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.404 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.404 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.404 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.404 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.404 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.404 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
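Annotation: the nvme_connect/nvme disconnect pair in the trace repeats the same authentication from the kernel initiator using nvme-cli. A sketch of that leg is below; the DHHC-1 strings are the plain-text form of key0/ckey0 and are shown as placeholders here rather than repeating the full values from the trace.

# Kernel-initiator leg of the cycle that just completed above.
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
key="DHHC-1:00:..."        # host secret; full value elided, see trace
ctrl_key="DHHC-1:03:..."   # controller secret for bidirectional auth; elided

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ctrl_key"

nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# Finally the host entry is removed from the subsystem before the next key id
# (rpc_cmd in the trace; shown here as rpc.py against the target's default socket).
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"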
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.968 00:16:47.968 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.968 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.968 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.226 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.226 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.227 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.227 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.227 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.227 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.227 { 00:16:48.227 "cntlid": 131, 00:16:48.227 "qid": 0, 00:16:48.227 "state": "enabled", 00:16:48.227 "thread": "nvmf_tgt_poll_group_000", 00:16:48.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:48.227 "listen_address": { 00:16:48.227 "trtype": "TCP", 00:16:48.227 "adrfam": "IPv4", 00:16:48.227 "traddr": "10.0.0.2", 00:16:48.227 "trsvcid": "4420" 00:16:48.227 }, 00:16:48.227 "peer_address": { 00:16:48.227 "trtype": "TCP", 00:16:48.227 "adrfam": "IPv4", 00:16:48.227 "traddr": "10.0.0.1", 00:16:48.227 "trsvcid": "43906" 00:16:48.227 }, 00:16:48.227 "auth": { 00:16:48.227 "state": "completed", 00:16:48.227 "digest": "sha512", 00:16:48.227 "dhgroup": "ffdhe6144" 00:16:48.227 } 00:16:48.227 } 00:16:48.227 ]' 00:16:48.227 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.227 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.227 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.227 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:48.227 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.227 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.227 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.227 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.484 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:16:48.484 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:16:49.418 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.418 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.418 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.418 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.418 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.418 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.418 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:49.418 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:49.676 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:49.676 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.676 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.676 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:49.676 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:49.676 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.676 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.676 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.676 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.676 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.676 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.676 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.676 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.241 00:16:50.241 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.241 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.241 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.499 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.499 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.499 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.499 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.499 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.499 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.499 { 00:16:50.499 "cntlid": 133, 00:16:50.499 "qid": 0, 00:16:50.499 "state": "enabled", 00:16:50.499 "thread": "nvmf_tgt_poll_group_000", 00:16:50.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:50.499 "listen_address": { 00:16:50.499 "trtype": "TCP", 00:16:50.499 "adrfam": "IPv4", 00:16:50.499 "traddr": "10.0.0.2", 00:16:50.499 "trsvcid": "4420" 00:16:50.499 }, 00:16:50.499 "peer_address": { 00:16:50.499 "trtype": "TCP", 00:16:50.499 "adrfam": "IPv4", 00:16:50.499 "traddr": "10.0.0.1", 00:16:50.499 "trsvcid": "41862" 00:16:50.499 }, 00:16:50.499 "auth": { 00:16:50.499 "state": "completed", 00:16:50.499 "digest": "sha512", 00:16:50.499 "dhgroup": "ffdhe6144" 00:16:50.499 } 00:16:50.499 } 00:16:50.499 ]' 00:16:50.499 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.757 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.757 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.757 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:50.757 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.757 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.757 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.757 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.015 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret 
DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:16:51.015 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:16:51.947 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.947 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.948 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.948 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.948 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.948 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.948 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:51.948 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:52.205 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:52.205 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.205 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.205 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:52.205 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:52.205 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.205 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:52.205 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.205 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.205 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.205 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.205 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:16:52.205 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.770 00:16:52.770 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.770 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.770 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.033 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.033 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.033 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.033 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.033 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.033 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.033 { 00:16:53.033 "cntlid": 135, 00:16:53.033 "qid": 0, 00:16:53.033 "state": "enabled", 00:16:53.033 "thread": "nvmf_tgt_poll_group_000", 00:16:53.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:53.033 "listen_address": { 00:16:53.033 "trtype": "TCP", 00:16:53.033 "adrfam": "IPv4", 00:16:53.033 "traddr": "10.0.0.2", 00:16:53.033 "trsvcid": "4420" 00:16:53.033 }, 00:16:53.033 "peer_address": { 00:16:53.033 "trtype": "TCP", 00:16:53.033 "adrfam": "IPv4", 00:16:53.033 "traddr": "10.0.0.1", 00:16:53.033 "trsvcid": "41884" 00:16:53.033 }, 00:16:53.033 "auth": { 00:16:53.033 "state": "completed", 00:16:53.033 "digest": "sha512", 00:16:53.033 "dhgroup": "ffdhe6144" 00:16:53.033 } 00:16:53.033 } 00:16:53.033 ]' 00:16:53.033 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.033 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.033 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.033 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.033 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.033 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.033 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.033 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.329 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
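Annotation: key id 3, traced just above, is the one case in the sweep with no controller key: the ${ckeys[$3]:+...} expansion collapses to nothing, so both the subsystem registration and the controller attach carry only --dhchap-key key3, and the target authenticates the host without being authenticated back. A minimal sketch of that unidirectional variant, using the same placeholders as earlier:

# Unidirectional variant, as for key3 above: no --dhchap-ctrlr-key / ckey3.
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key3
# On the nvme-cli side the same case uses a single --dhchap-secret and no
# --dhchap-ctrl-secret, as in the connect command traced below.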
DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:16:53.329 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:16:54.295 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.295 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.295 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.295 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.295 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.295 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.295 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.295 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:54.295 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:54.553 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:54.553 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.553 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.553 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:54.553 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:54.553 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.553 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.553 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.553 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.553 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.553 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.553 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
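Annotation: at this point the sweep moves from ffdhe6144 to ffdhe8192. Judging from the auth.sh line references in the trace (@119 through @123), the driver is a simple nested loop over dhgroups and key ids; a sketch under that assumption, with dhgroups/keys as the arrays the script appears to iterate and sha512 fixed because this portion of the run is the sha512 pass:

# Assumed shape of the driving loop (target/auth.sh @119-@123 in the trace).
for dhgroup in "${dhgroups[@]}"; do        # e.g. ... ffdhe6144 ffdhe8192
    for keyid in "${!keys[@]}"; do         # key ids 0..3
        hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
            --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha512 "$dhgroup" "$keyid"
    done
done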
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.553 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.486 00:16:55.486 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.486 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.486 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.744 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.744 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.744 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.744 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.744 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.744 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.744 { 00:16:55.744 "cntlid": 137, 00:16:55.744 "qid": 0, 00:16:55.744 "state": "enabled", 00:16:55.744 "thread": "nvmf_tgt_poll_group_000", 00:16:55.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:55.744 "listen_address": { 00:16:55.744 "trtype": "TCP", 00:16:55.744 "adrfam": "IPv4", 00:16:55.744 "traddr": "10.0.0.2", 00:16:55.744 "trsvcid": "4420" 00:16:55.744 }, 00:16:55.744 "peer_address": { 00:16:55.744 "trtype": "TCP", 00:16:55.744 "adrfam": "IPv4", 00:16:55.744 "traddr": "10.0.0.1", 00:16:55.744 "trsvcid": "41896" 00:16:55.744 }, 00:16:55.744 "auth": { 00:16:55.744 "state": "completed", 00:16:55.744 "digest": "sha512", 00:16:55.744 "dhgroup": "ffdhe8192" 00:16:55.744 } 00:16:55.744 } 00:16:55.744 ]' 00:16:55.744 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.744 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.744 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.744 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.744 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.744 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.744 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.744 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.309 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:16:56.309 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.242 04:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.242 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.175 00:16:58.175 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.175 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.175 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.433 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.433 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.433 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.433 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.433 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.433 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.433 { 00:16:58.433 "cntlid": 139, 00:16:58.433 "qid": 0, 00:16:58.433 "state": "enabled", 00:16:58.433 "thread": "nvmf_tgt_poll_group_000", 00:16:58.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:58.433 "listen_address": { 00:16:58.433 "trtype": "TCP", 00:16:58.433 "adrfam": "IPv4", 00:16:58.433 "traddr": "10.0.0.2", 00:16:58.433 "trsvcid": "4420" 00:16:58.433 }, 00:16:58.433 "peer_address": { 00:16:58.433 "trtype": "TCP", 00:16:58.433 "adrfam": "IPv4", 00:16:58.433 "traddr": "10.0.0.1", 00:16:58.433 "trsvcid": "41920" 00:16:58.433 }, 00:16:58.433 "auth": { 00:16:58.433 "state": "completed", 00:16:58.433 "digest": "sha512", 00:16:58.433 "dhgroup": "ffdhe8192" 00:16:58.433 } 00:16:58.433 } 00:16:58.433 ]' 00:16:58.433 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.433 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.433 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.433 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.433 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.691 04:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.691 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.691 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.948 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:16:58.948 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: --dhchap-ctrl-secret DHHC-1:02:MzhkMDZkZDY1OWVhODExMWY5NDIxZDRjYWI2OWYxNzNmMTU0ZDdjNjg4NWYzMGYyMShZvg==: 00:16:59.881 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.881 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.881 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.881 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.881 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.881 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.881 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:59.881 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:00.139 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:00.139 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.139 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.139 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:00.139 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:00.139 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.139 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.139 04:04:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.139 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.139 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.139 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.139 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.139 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.072 00:17:01.072 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.072 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.072 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.329 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.329 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.329 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.329 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.329 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.329 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.329 { 00:17:01.329 "cntlid": 141, 00:17:01.329 "qid": 0, 00:17:01.329 "state": "enabled", 00:17:01.329 "thread": "nvmf_tgt_poll_group_000", 00:17:01.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:01.330 "listen_address": { 00:17:01.330 "trtype": "TCP", 00:17:01.330 "adrfam": "IPv4", 00:17:01.330 "traddr": "10.0.0.2", 00:17:01.330 "trsvcid": "4420" 00:17:01.330 }, 00:17:01.330 "peer_address": { 00:17:01.330 "trtype": "TCP", 00:17:01.330 "adrfam": "IPv4", 00:17:01.330 "traddr": "10.0.0.1", 00:17:01.330 "trsvcid": "56404" 00:17:01.330 }, 00:17:01.330 "auth": { 00:17:01.330 "state": "completed", 00:17:01.330 "digest": "sha512", 00:17:01.330 "dhgroup": "ffdhe8192" 00:17:01.330 } 00:17:01.330 } 00:17:01.330 ]' 00:17:01.330 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.330 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.330 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.330 04:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.330 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.330 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.330 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.330 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.587 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:17:01.587 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:01:YTMxYjk3NmU5MTQ1ZDE0ODFmYWYzY2EzYTRjMGJlNGHC1zS4: 00:17:02.519 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.520 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.520 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.520 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.520 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.520 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.520 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:02.520 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:02.777 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:02.777 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.777 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.777 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:02.777 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:02.777 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.777 04:04:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:02.777 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.777 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.777 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.777 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:02.777 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.777 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.709 00:17:03.709 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.709 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.709 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.966 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.966 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.966 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.966 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.966 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.966 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.966 { 00:17:03.966 "cntlid": 143, 00:17:03.966 "qid": 0, 00:17:03.966 "state": "enabled", 00:17:03.966 "thread": "nvmf_tgt_poll_group_000", 00:17:03.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:03.966 "listen_address": { 00:17:03.966 "trtype": "TCP", 00:17:03.966 "adrfam": "IPv4", 00:17:03.966 "traddr": "10.0.0.2", 00:17:03.966 "trsvcid": "4420" 00:17:03.966 }, 00:17:03.966 "peer_address": { 00:17:03.966 "trtype": "TCP", 00:17:03.966 "adrfam": "IPv4", 00:17:03.966 "traddr": "10.0.0.1", 00:17:03.966 "trsvcid": "56432" 00:17:03.966 }, 00:17:03.966 "auth": { 00:17:03.966 "state": "completed", 00:17:03.966 "digest": "sha512", 00:17:03.966 "dhgroup": "ffdhe8192" 00:17:03.966 } 00:17:03.966 } 00:17:03.966 ]' 00:17:03.966 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.966 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.966 
04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.224 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.224 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.224 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.224 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.224 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.482 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:17:04.482 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:17:05.413 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.413 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:05.413 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.413 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.413 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.413 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:05.413 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:05.413 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:05.413 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:05.413 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:05.413 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:05.671 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:05.671 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.671 04:04:59 
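Annotation: the pass that starts above (auth.sh @129 through @141) re-enables every digest and dhgroup at once and repeats the handshake. The comma-joined lists are assembled in the script with IFS=, and printf, but the resulting RPC call reduces to the single invocation below; the qpair that follows in the trace still reports sha512 / ffdhe8192, which is what connect_authenticate checks for in this run.

# Combined configuration, as issued in the trace above.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192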
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.671 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:05.671 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:05.671 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.671 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.671 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.671 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.671 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.671 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.671 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.671 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.603 00:17:06.603 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.603 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.603 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.859 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.859 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.859 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.859 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.859 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.859 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.859 { 00:17:06.859 "cntlid": 145, 00:17:06.859 "qid": 0, 00:17:06.859 "state": "enabled", 00:17:06.859 "thread": "nvmf_tgt_poll_group_000", 00:17:06.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:06.859 "listen_address": { 00:17:06.859 "trtype": "TCP", 00:17:06.859 "adrfam": "IPv4", 00:17:06.859 "traddr": "10.0.0.2", 00:17:06.859 "trsvcid": "4420" 00:17:06.859 }, 00:17:06.859 "peer_address": { 00:17:06.859 
"trtype": "TCP", 00:17:06.859 "adrfam": "IPv4", 00:17:06.859 "traddr": "10.0.0.1", 00:17:06.859 "trsvcid": "56448" 00:17:06.859 }, 00:17:06.859 "auth": { 00:17:06.859 "state": "completed", 00:17:06.859 "digest": "sha512", 00:17:06.859 "dhgroup": "ffdhe8192" 00:17:06.859 } 00:17:06.859 } 00:17:06.859 ]' 00:17:06.859 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.859 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.859 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.116 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.116 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.116 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.116 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.116 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.374 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:17:07.374 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzhkM2Q1NWIyMmE5NDlhNjU2OTk5MjkyMGRmYzRmOTQ2NjYzNTJkZmRjYzg4YTcwHJRxmQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY5NGFlODRhMTA3NDY0MmIyNjdlN2E2ZmRhY2Q2YTVhYmFlMTYwMmRlYTQyOWQ1Yzk1NWZhM2U2MGQyY2I2NBU8Ifw=: 00:17:08.306 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.307 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.307 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.307 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.307 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.307 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:08.307 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.307 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.307 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.307 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:08.307 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:08.307 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:08.307 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:08.307 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.307 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:08.307 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.307 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:08.307 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:08.307 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:09.240 request: 00:17:09.240 { 00:17:09.240 "name": "nvme0", 00:17:09.240 "trtype": "tcp", 00:17:09.240 "traddr": "10.0.0.2", 00:17:09.240 "adrfam": "ipv4", 00:17:09.240 "trsvcid": "4420", 00:17:09.240 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:09.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:09.240 "prchk_reftag": false, 00:17:09.240 "prchk_guard": false, 00:17:09.240 "hdgst": false, 00:17:09.240 "ddgst": false, 00:17:09.240 "dhchap_key": "key2", 00:17:09.240 "allow_unrecognized_csi": false, 00:17:09.240 "method": "bdev_nvme_attach_controller", 00:17:09.240 "req_id": 1 00:17:09.240 } 00:17:09.240 Got JSON-RPC error response 00:17:09.240 response: 00:17:09.240 { 00:17:09.240 "code": -5, 00:17:09.240 "message": "Input/output error" 00:17:09.240 } 00:17:09.240 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:09.240 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.240 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.240 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.240 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.240 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.240 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.240 04:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.240 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.240 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.240 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.240 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.240 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.240 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:09.240 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.240 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:09.241 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.241 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:09.241 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.241 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.241 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.241 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.806 request: 00:17:09.806 { 00:17:09.806 "name": "nvme0", 00:17:09.806 "trtype": "tcp", 00:17:09.806 "traddr": "10.0.0.2", 00:17:09.806 "adrfam": "ipv4", 00:17:09.806 "trsvcid": "4420", 00:17:09.806 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:09.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:09.806 "prchk_reftag": false, 00:17:09.806 "prchk_guard": false, 00:17:09.806 "hdgst": false, 00:17:09.806 "ddgst": false, 00:17:09.806 "dhchap_key": "key1", 00:17:09.806 "dhchap_ctrlr_key": "ckey2", 00:17:09.806 "allow_unrecognized_csi": false, 00:17:09.806 "method": "bdev_nvme_attach_controller", 00:17:09.806 "req_id": 1 00:17:09.806 } 00:17:09.806 Got JSON-RPC error response 00:17:09.806 response: 00:17:09.806 { 00:17:09.806 "code": -5, 00:17:09.806 "message": "Input/output error" 00:17:09.806 } 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:09.806 04:05:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.806 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.739 request: 00:17:10.739 { 00:17:10.739 "name": "nvme0", 00:17:10.739 "trtype": "tcp", 00:17:10.739 "traddr": "10.0.0.2", 00:17:10.739 "adrfam": "ipv4", 00:17:10.739 "trsvcid": "4420", 00:17:10.739 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:10.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:10.739 "prchk_reftag": false, 00:17:10.739 "prchk_guard": false, 00:17:10.739 "hdgst": false, 00:17:10.739 "ddgst": false, 00:17:10.739 "dhchap_key": "key1", 00:17:10.739 "dhchap_ctrlr_key": "ckey1", 00:17:10.739 "allow_unrecognized_csi": false, 00:17:10.739 "method": "bdev_nvme_attach_controller", 00:17:10.739 "req_id": 1 00:17:10.739 } 00:17:10.739 Got JSON-RPC error response 00:17:10.739 response: 00:17:10.739 { 00:17:10.739 "code": -5, 00:17:10.739 "message": "Input/output error" 00:17:10.739 } 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2378982 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2378982 ']' 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2378982 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2378982 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2378982' 00:17:10.739 killing process with pid 2378982 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2378982 00:17:10.739 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2378982 00:17:10.996 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:10.996 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:10.996 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:10.996 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:10.996 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2401771 00:17:10.996 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:10.996 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2401771 00:17:10.996 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2401771 ']' 00:17:10.996 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.996 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.996 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.996 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.996 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.254 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.254 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:11.254 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:11.254 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:11.254 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.254 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.254 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:11.254 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2401771 00:17:11.254 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2401771 ']' 00:17:11.254 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.254 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.254 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
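The records that follow load the DHCHAP key files generated earlier in this run into the restarted target's keyring and re-bind them to the host NQN. Condensed to its essentials, the provisioning pattern is as below (a sketch assembled only from RPCs, key names, and file paths already present in this log; rpc_cmd is the test helper that talks to the nvmf_tgt RPC socket):

  # target side: register the key files in the keyring
  rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tQ2
  rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bPO
  # allow the host NQN on the subsystem, pinning its DH-HMAC-CHAP key pair
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0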
00:17:11.254 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.254 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.511 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.511 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:11.511 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:11.511 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.511 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.769 null0 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tQ2 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.bPO ]] 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bPO 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.uR8 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.4MZ ]] 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4MZ 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:11.769 04:05:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Blp 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.n7y ]] 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n7y 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.jHb 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
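Here hostrpc is the test's thin wrapper; the next record shows it expanding into the raw rpc.py call against the host-side SPDK app. In general the host attach with DH-HMAC-CHAP takes the form below (a sketch restricted to flags that appear in this log; add --dhchap-ctrlr-key ckeyN when bidirectional authentication is wanted):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

Success is then checked the same way as the key0 case earlier in the log: bdev_nvme_get_controllers on the host side should report nvme0, and nvmf_subsystem_get_qpairs on the target should show auth.state "completed" together with the negotiated digest and dhgroup.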
00:17:11.769 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.142 nvme0n1 00:17:13.142 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.142 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.142 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.707 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.707 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.707 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.707 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.707 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.707 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.707 { 00:17:13.707 "cntlid": 1, 00:17:13.707 "qid": 0, 00:17:13.707 "state": "enabled", 00:17:13.707 "thread": "nvmf_tgt_poll_group_000", 00:17:13.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:13.707 "listen_address": { 00:17:13.707 "trtype": "TCP", 00:17:13.707 "adrfam": "IPv4", 00:17:13.707 "traddr": "10.0.0.2", 00:17:13.707 "trsvcid": "4420" 00:17:13.707 }, 00:17:13.707 "peer_address": { 00:17:13.707 "trtype": "TCP", 00:17:13.707 "adrfam": "IPv4", 00:17:13.707 "traddr": "10.0.0.1", 00:17:13.707 "trsvcid": "36162" 00:17:13.707 }, 00:17:13.707 "auth": { 00:17:13.707 "state": "completed", 00:17:13.708 "digest": "sha512", 00:17:13.708 "dhgroup": "ffdhe8192" 00:17:13.708 } 00:17:13.708 } 00:17:13.708 ]' 00:17:13.708 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.708 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.708 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.708 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:13.708 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.708 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.708 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.708 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.965 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:17:13.965 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:17:14.897 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.897 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.897 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.897 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.897 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.897 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:14.897 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.897 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.897 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.897 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:14.897 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:15.155 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:15.155 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:15.155 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:15.155 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:15.155 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.155 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:15.155 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.155 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:15.155 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.155 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.412 request: 00:17:15.412 { 00:17:15.412 "name": "nvme0", 00:17:15.412 "trtype": "tcp", 00:17:15.412 "traddr": "10.0.0.2", 00:17:15.412 "adrfam": "ipv4", 00:17:15.412 "trsvcid": "4420", 00:17:15.412 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:15.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:15.412 "prchk_reftag": false, 00:17:15.412 "prchk_guard": false, 00:17:15.412 "hdgst": false, 00:17:15.412 "ddgst": false, 00:17:15.412 "dhchap_key": "key3", 00:17:15.412 "allow_unrecognized_csi": false, 00:17:15.412 "method": "bdev_nvme_attach_controller", 00:17:15.412 "req_id": 1 00:17:15.412 } 00:17:15.412 Got JSON-RPC error response 00:17:15.412 response: 00:17:15.412 { 00:17:15.412 "code": -5, 00:17:15.412 "message": "Input/output error" 00:17:15.412 } 00:17:15.412 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:15.412 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:15.412 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:15.412 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:15.412 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:15.412 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:15.412 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:15.412 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:15.670 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:15.670 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:15.670 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:15.670 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:15.670 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.670 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:15.670 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.670 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:15.670 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.670 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.928 request: 00:17:15.928 { 00:17:15.928 "name": "nvme0", 00:17:15.928 "trtype": "tcp", 00:17:15.928 "traddr": "10.0.0.2", 00:17:15.928 "adrfam": "ipv4", 00:17:15.928 "trsvcid": "4420", 00:17:15.928 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:15.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:15.928 "prchk_reftag": false, 00:17:15.928 "prchk_guard": false, 00:17:15.928 "hdgst": false, 00:17:15.928 "ddgst": false, 00:17:15.928 "dhchap_key": "key3", 00:17:15.928 "allow_unrecognized_csi": false, 00:17:15.928 "method": "bdev_nvme_attach_controller", 00:17:15.928 "req_id": 1 00:17:15.928 } 00:17:15.928 Got JSON-RPC error response 00:17:15.928 response: 00:17:15.928 { 00:17:15.928 "code": -5, 00:17:15.928 "message": "Input/output error" 00:17:15.928 } 00:17:15.928 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:15.928 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:15.928 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:15.928 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:15.928 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:15.928 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:15.928 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:15.928 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:15.928 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:15.928 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:16.186 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.186 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.186 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.186 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.186 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.186 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.186 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.186 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.186 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:16.186 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:16.186 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:16.186 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:16.186 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.186 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:16.186 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.186 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:16.186 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:16.186 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:16.751 request: 00:17:16.751 { 00:17:16.751 "name": "nvme0", 00:17:16.751 "trtype": "tcp", 00:17:16.751 "traddr": "10.0.0.2", 00:17:16.751 "adrfam": "ipv4", 00:17:16.751 "trsvcid": "4420", 00:17:16.751 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:16.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:16.751 "prchk_reftag": false, 00:17:16.751 "prchk_guard": false, 00:17:16.751 "hdgst": false, 00:17:16.751 "ddgst": false, 00:17:16.751 "dhchap_key": "key0", 00:17:16.751 "dhchap_ctrlr_key": "key1", 00:17:16.751 "allow_unrecognized_csi": false, 00:17:16.751 "method": "bdev_nvme_attach_controller", 00:17:16.751 "req_id": 1 00:17:16.751 } 00:17:16.751 Got JSON-RPC error response 00:17:16.751 response: 00:17:16.751 { 00:17:16.751 "code": -5, 00:17:16.751 "message": "Input/output error" 00:17:16.751 } 00:17:16.751 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:16.751 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.751 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.751 04:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.751 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:16.751 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:16.751 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:17.317 nvme0n1 00:17:17.317 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:17.317 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.317 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:17.574 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.574 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.574 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.831 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:17.831 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.831 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.831 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.831 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:17.831 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:17.831 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:19.204 nvme0n1 00:17:19.204 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:19.204 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:19.204 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.461 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.462 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:19.462 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.462 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.462 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.462 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:19.462 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:19.462 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.719 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.719 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:17:19.719 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: --dhchap-ctrl-secret DHHC-1:03:YzZmZDE2ZjNmMTIzM2QwNzhjNzk3YTUzMmM1Njk5NWExZjhiMDFhNGI3OGQwYWQyYTc5MTM4NTEyOTE1ZTIyY66b7Jw=: 00:17:20.652 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:20.652 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:20.652 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:20.652 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:20.652 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:20.652 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:20.652 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:20.652 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.652 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.908 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:17:20.908 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:20.908 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:20.908 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:20.908 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.908 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:20.908 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.908 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:20.908 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:20.908 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:21.840 request: 00:17:21.840 { 00:17:21.840 "name": "nvme0", 00:17:21.840 "trtype": "tcp", 00:17:21.840 "traddr": "10.0.0.2", 00:17:21.840 "adrfam": "ipv4", 00:17:21.840 "trsvcid": "4420", 00:17:21.840 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:21.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:21.840 "prchk_reftag": false, 00:17:21.840 "prchk_guard": false, 00:17:21.840 "hdgst": false, 00:17:21.840 "ddgst": false, 00:17:21.840 "dhchap_key": "key1", 00:17:21.840 "allow_unrecognized_csi": false, 00:17:21.840 "method": "bdev_nvme_attach_controller", 00:17:21.840 "req_id": 1 00:17:21.840 } 00:17:21.840 Got JSON-RPC error response 00:17:21.840 response: 00:17:21.840 { 00:17:21.840 "code": -5, 00:17:21.840 "message": "Input/output error" 00:17:21.840 } 00:17:21.840 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:21.840 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:21.840 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:21.840 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:21.840 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:21.840 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:21.841 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:23.213 nvme0n1 00:17:23.213 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:23.213 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:23.213 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.213 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.213 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.213 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.471 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:23.471 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.471 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.471 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.471 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:23.471 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:23.471 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:24.036 nvme0n1 00:17:24.036 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:24.036 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:24.036 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.036 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.036 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.036 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.629 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:24.629 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.629 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.629 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.629 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: '' 2s 00:17:24.629 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:24.629 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:24.629 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: 00:17:24.629 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:24.629 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:24.629 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:24.629 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: ]] 00:17:24.629 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:N2EyYTAxMGZiYWI2Y2M4ZmEyYTU3NWJkMjgxNjQ5ZjkvIgr9: 00:17:24.629 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:24.629 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:24.629 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:26.550 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:26.550 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:26.550 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:26.550 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: 2s 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: ]] 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OTg5NWFlMmJkODdkMTI2NTE3MDVmZWYzMjE2NjQ0MWQ1MzU1MmY5MGFkZTAwNzZixWMdiA==: 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:26.551 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:28.449 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:28.449 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:28.449 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:28.449 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:28.449 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:28.449 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:28.449 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:28.449 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.449 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:28.449 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.449 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.449 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.449 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:28.449 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:28.449 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:29.819 nvme0n1 00:17:29.819 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:29.819 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.819 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.819 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.819 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:29.819 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:30.753 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:30.753 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:30.753 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.010 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.010 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.010 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.010 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.010 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.010 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:31.010 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:31.267 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:31.267 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.267 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@258 -- # jq -r '.[].name' 00:17:31.524 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.524 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:31.524 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.524 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.524 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.524 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:31.524 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:31.524 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:31.524 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:31.524 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.524 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:31.524 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.524 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:31.524 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:32.457 request: 00:17:32.457 { 00:17:32.457 "name": "nvme0", 00:17:32.457 "dhchap_key": "key1", 00:17:32.457 "dhchap_ctrlr_key": "key3", 00:17:32.457 "method": "bdev_nvme_set_keys", 00:17:32.457 "req_id": 1 00:17:32.457 } 00:17:32.457 Got JSON-RPC error response 00:17:32.457 response: 00:17:32.457 { 00:17:32.457 "code": -13, 00:17:32.457 "message": "Permission denied" 00:17:32.457 } 00:17:32.457 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:32.457 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:32.457 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:32.457 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:32.457 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:32.457 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:32.458 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.715 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:17:32.715 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:33.649 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:33.649 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:33.649 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.907 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:33.907 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:33.907 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.907 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.907 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.907 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:33.907 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:33.907 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:35.280 nvme0n1 00:17:35.280 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:35.280 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.280 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.280 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.280 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:35.280 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:35.280 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:35.280 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
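Condensed from the trace above, the re-key pattern the test keeps repeating is a two-step RPC sequence: stage the new DH-HMAC-CHAP keys for this host on the target subsystem, then push the matching keys to the host-side bdev controller so re-authentication uses them. A minimal sketch, reusing the NQNs, socket path and key names exactly as they appear in the trace (rpc.py stands for scripts/rpc.py in the SPDK tree; the target side is assumed to listen on its default RPC socket, which the trace never overrides):

# Target side: stage the new keys for this host on the subsystem.
rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# Host side: update the attached controller through the host app's RPC socket.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# Pushing keys the target was not given (key1/key3 in the negative case above) fails
# re-authentication, and the RPC returns code -13 "Permission denied".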
00:17:35.280 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.280 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:35.280 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.280 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:35.280 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:36.215 request: 00:17:36.215 { 00:17:36.215 "name": "nvme0", 00:17:36.215 "dhchap_key": "key2", 00:17:36.215 "dhchap_ctrlr_key": "key0", 00:17:36.215 "method": "bdev_nvme_set_keys", 00:17:36.215 "req_id": 1 00:17:36.215 } 00:17:36.215 Got JSON-RPC error response 00:17:36.215 response: 00:17:36.215 { 00:17:36.215 "code": -13, 00:17:36.215 "message": "Permission denied" 00:17:36.215 } 00:17:36.215 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:36.215 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:36.215 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:36.215 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:36.215 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:36.215 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.215 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:36.473 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:36.473 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:37.406 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:37.406 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:37.406 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.664 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:37.664 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:37.664 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:37.664 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2379124 00:17:37.664 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2379124 ']' 00:17:37.664 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2379124 00:17:37.664 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:37.664 
04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:37.664 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2379124 00:17:37.664 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:37.664 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:37.665 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2379124' 00:17:37.665 killing process with pid 2379124 00:17:37.665 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2379124 00:17:37.665 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2379124 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:38.231 rmmod nvme_tcp 00:17:38.231 rmmod nvme_fabrics 00:17:38.231 rmmod nvme_keyring 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2401771 ']' 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2401771 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2401771 ']' 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2401771 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2401771 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2401771' 00:17:38.231 killing process with pid 2401771 00:17:38.231 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2401771 00:17:38.231 04:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2401771 00:17:38.491 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:38.491 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:38.491 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:38.491 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:38.491 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:38.491 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:38.491 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:38.491 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:38.491 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:38.491 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.491 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.491 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.401 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:40.401 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.tQ2 /tmp/spdk.key-sha256.uR8 /tmp/spdk.key-sha384.Blp /tmp/spdk.key-sha512.jHb /tmp/spdk.key-sha512.bPO /tmp/spdk.key-sha384.4MZ /tmp/spdk.key-sha256.n7y '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:40.401 00:17:40.401 real 3m30.165s 00:17:40.401 user 8m13.505s 00:17:40.401 sys 0m27.557s 00:17:40.401 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:40.401 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.401 ************************************ 00:17:40.401 END TEST nvmf_auth_target 00:17:40.401 ************************************ 00:17:40.401 04:05:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:40.401 04:05:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:40.401 04:05:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:40.401 04:05:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.401 04:05:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:40.401 ************************************ 00:17:40.401 START TEST nvmf_bdevio_no_huge 00:17:40.401 ************************************ 00:17:40.401 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:40.661 * Looking for test storage... 
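Before the bdevio run gets going, note that the nvmftestfini/cleanup block that just closed the auth test boils down to a handful of host commands. A sketch assembled from the lines above (the key-file glob is an illustrative stand-in for the exact /tmp/spdk.key-* temporaries the test created):

# Unload the host-side NVMe/TCP stack (this is what produced the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Drop only the SPDK-tagged firewall rules and leave the rest of the ruleset untouched.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Flush the initiator-side interface and remove the generated DH-HMAC-CHAP key files.
ip -4 addr flush cvl_0_1
rm -f /tmp/spdk.key-*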
00:17:40.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:40.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.661 --rc genhtml_branch_coverage=1 00:17:40.661 --rc genhtml_function_coverage=1 00:17:40.661 --rc genhtml_legend=1 00:17:40.661 --rc geninfo_all_blocks=1 00:17:40.661 --rc geninfo_unexecuted_blocks=1 00:17:40.661 00:17:40.661 ' 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:40.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.661 --rc genhtml_branch_coverage=1 00:17:40.661 --rc genhtml_function_coverage=1 00:17:40.661 --rc genhtml_legend=1 00:17:40.661 --rc geninfo_all_blocks=1 00:17:40.661 --rc geninfo_unexecuted_blocks=1 00:17:40.661 00:17:40.661 ' 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:40.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.661 --rc genhtml_branch_coverage=1 00:17:40.661 --rc genhtml_function_coverage=1 00:17:40.661 --rc genhtml_legend=1 00:17:40.661 --rc geninfo_all_blocks=1 00:17:40.661 --rc geninfo_unexecuted_blocks=1 00:17:40.661 00:17:40.661 ' 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:40.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.661 --rc genhtml_branch_coverage=1 00:17:40.661 --rc genhtml_function_coverage=1 00:17:40.661 --rc genhtml_legend=1 00:17:40.661 --rc geninfo_all_blocks=1 00:17:40.661 --rc geninfo_unexecuted_blocks=1 00:17:40.661 00:17:40.661 ' 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:40.661 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:40.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:40.662 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:43.196 
04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.196 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:43.197 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:43.197 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:43.197 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:43.197 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:43.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:17:43.197 00:17:43.197 --- 10.0.0.2 ping statistics --- 00:17:43.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.197 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:43.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:17:43.197 00:17:43.197 --- 10.0.0.1 ping statistics --- 00:17:43.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.197 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2407030 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2407030 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2407030 ']' 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.197 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.198 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.198 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.198 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.198 [2024-12-10 04:05:37.309727] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:17:43.198 [2024-12-10 04:05:37.309826] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:43.198 [2024-12-10 04:05:37.391162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:43.198 [2024-12-10 04:05:37.452193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.198 [2024-12-10 04:05:37.452258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.198 [2024-12-10 04:05:37.452279] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.198 [2024-12-10 04:05:37.452299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.198 [2024-12-10 04:05:37.452315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
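The target for this test is launched without hugepages, and the flag-to-EAL mapping is visible in the parameter dump above; restated as a standalone sketch (binary path as in this workspace):

# -m 0x78: core mask for cores 3-6, matching the four "Reactor started on core N" notices that follow.
# --no-huge -s 1024: anonymous memory instead of hugepages, capped at 1024 MB (the EAL line shows "-m 1024 --no-huge").
# -i 0: shared-memory instance id (hence --file-prefix=spdk0 in the EAL line); -e 0xFFFF: tracepoint group mask.
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78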
00:17:43.198 [2024-12-10 04:05:37.453381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:43.198 [2024-12-10 04:05:37.453441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:43.198 [2024-12-10 04:05:37.453508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:43.198 [2024-12-10 04:05:37.453511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.457 [2024-12-10 04:05:37.609308] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.457 Malloc0 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.457 [2024-12-10 04:05:37.647364] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:43.457 { 00:17:43.457 "params": { 00:17:43.457 "name": "Nvme$subsystem", 00:17:43.457 "trtype": "$TEST_TRANSPORT", 00:17:43.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:43.457 "adrfam": "ipv4", 00:17:43.457 "trsvcid": "$NVMF_PORT", 00:17:43.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:43.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:43.457 "hdgst": ${hdgst:-false}, 00:17:43.457 "ddgst": ${ddgst:-false} 00:17:43.457 }, 00:17:43.457 "method": "bdev_nvme_attach_controller" 00:17:43.457 } 00:17:43.457 EOF 00:17:43.457 )") 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:43.457 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:43.457 "params": { 00:17:43.457 "name": "Nvme1", 00:17:43.457 "trtype": "tcp", 00:17:43.457 "traddr": "10.0.0.2", 00:17:43.457 "adrfam": "ipv4", 00:17:43.457 "trsvcid": "4420", 00:17:43.457 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.457 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.457 "hdgst": false, 00:17:43.457 "ddgst": false 00:17:43.457 }, 00:17:43.457 "method": "bdev_nvme_attach_controller" 00:17:43.457 }' 00:17:43.457 [2024-12-10 04:05:37.699854] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
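Taken together, the RPC trace above amounts to a small provisioning script for the target side of this case: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 exporting that bdev, and a TCP listener on 10.0.0.2:4420; bdevio is then launched with --no-huge -s 1024 and attaches through the generated JSON (the bdev_nvme_attach_controller entry fed in via /dev/fd/62). A condensed sketch of the same sequence, assuming the rpc.py path shown in the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8 KiB in-capsule data
  $rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420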
00:17:43.457 [2024-12-10 04:05:37.699941] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2407060 ] 00:17:43.457 [2024-12-10 04:05:37.778038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:43.715 [2024-12-10 04:05:37.844555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.715 [2024-12-10 04:05:37.844590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.715 [2024-12-10 04:05:37.844595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.973 I/O targets: 00:17:43.973 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:43.973 00:17:43.973 00:17:43.973 CUnit - A unit testing framework for C - Version 2.1-3 00:17:43.973 http://cunit.sourceforge.net/ 00:17:43.973 00:17:43.973 00:17:43.973 Suite: bdevio tests on: Nvme1n1 00:17:43.973 Test: blockdev write read block ...passed 00:17:43.973 Test: blockdev write zeroes read block ...passed 00:17:43.973 Test: blockdev write zeroes read no split ...passed 00:17:43.973 Test: blockdev write zeroes read split ...passed 00:17:43.973 Test: blockdev write zeroes read split partial ...passed 00:17:43.973 Test: blockdev reset ...[2024-12-10 04:05:38.322136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:43.973 [2024-12-10 04:05:38.322253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3b2b0 (9): Bad file descriptor 00:17:44.230 [2024-12-10 04:05:38.382049] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:44.230 passed 00:17:44.230 Test: blockdev write read 8 blocks ...passed 00:17:44.230 Test: blockdev write read size > 128k ...passed 00:17:44.230 Test: blockdev write read invalid size ...passed 00:17:44.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:44.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:44.230 Test: blockdev write read max offset ...passed 00:17:44.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:44.230 Test: blockdev writev readv 8 blocks ...passed 00:17:44.230 Test: blockdev writev readv 30 x 1block ...passed 00:17:44.488 Test: blockdev writev readv block ...passed 00:17:44.488 Test: blockdev writev readv size > 128k ...passed 00:17:44.488 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:44.488 Test: blockdev comparev and writev ...[2024-12-10 04:05:38.636794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:44.488 [2024-12-10 04:05:38.636830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.488 [2024-12-10 04:05:38.636855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:44.488 [2024-12-10 04:05:38.636872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:44.489 [2024-12-10 04:05:38.637276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:44.489 [2024-12-10 04:05:38.637300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:44.489 [2024-12-10 04:05:38.637322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:44.489 [2024-12-10 04:05:38.637338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:44.489 [2024-12-10 04:05:38.637754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:44.489 [2024-12-10 04:05:38.637778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:44.489 [2024-12-10 04:05:38.637800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:44.489 [2024-12-10 04:05:38.637817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:44.489 [2024-12-10 04:05:38.638216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:44.489 [2024-12-10 04:05:38.638240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:44.489 [2024-12-10 04:05:38.638262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:44.489 [2024-12-10 04:05:38.638278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:44.489 passed 00:17:44.489 Test: blockdev nvme passthru rw ...passed 00:17:44.489 Test: blockdev nvme passthru vendor specific ...[2024-12-10 04:05:38.720838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:44.489 [2024-12-10 04:05:38.720874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:44.489 [2024-12-10 04:05:38.721008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:44.489 [2024-12-10 04:05:38.721030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:44.489 [2024-12-10 04:05:38.721168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:44.489 [2024-12-10 04:05:38.721190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:44.489 [2024-12-10 04:05:38.721321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:44.489 [2024-12-10 04:05:38.721342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:44.489 passed 00:17:44.489 Test: blockdev nvme admin passthru ...passed 00:17:44.489 Test: blockdev copy ...passed 00:17:44.489 00:17:44.489 Run Summary: Type Total Ran Passed Failed Inactive 00:17:44.489 suites 1 1 n/a 0 0 00:17:44.489 tests 23 23 23 0 0 00:17:44.489 asserts 152 152 152 0 n/a 00:17:44.489 00:17:44.489 Elapsed time = 1.147 seconds 00:17:44.747 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:44.747 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.747 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:45.005 rmmod nvme_tcp 00:17:45.005 rmmod nvme_fabrics 00:17:45.005 rmmod nvme_keyring 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2407030 ']' 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2407030 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2407030 ']' 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2407030 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2407030 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2407030' 00:17:45.005 killing process with pid 2407030 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2407030 00:17:45.005 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2407030 00:17:45.266 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:45.266 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:45.266 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:45.266 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:45.266 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:45.266 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:45.266 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:45.266 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:45.266 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:45.266 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.266 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:45.266 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.802 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:47.802 00:17:47.802 real 0m6.909s 00:17:47.802 user 0m11.830s 00:17:47.802 sys 0m2.692s 00:17:47.802 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.802 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:17:47.802 ************************************ 00:17:47.802 END TEST nvmf_bdevio_no_huge 00:17:47.802 ************************************ 00:17:47.802 04:05:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:47.802 04:05:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:47.803 ************************************ 00:17:47.803 START TEST nvmf_tls 00:17:47.803 ************************************ 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:47.803 * Looking for test storage... 00:17:47.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:47.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.803 --rc genhtml_branch_coverage=1 00:17:47.803 --rc genhtml_function_coverage=1 00:17:47.803 --rc genhtml_legend=1 00:17:47.803 --rc geninfo_all_blocks=1 00:17:47.803 --rc geninfo_unexecuted_blocks=1 00:17:47.803 00:17:47.803 ' 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:47.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.803 --rc genhtml_branch_coverage=1 00:17:47.803 --rc genhtml_function_coverage=1 00:17:47.803 --rc genhtml_legend=1 00:17:47.803 --rc geninfo_all_blocks=1 00:17:47.803 --rc geninfo_unexecuted_blocks=1 00:17:47.803 00:17:47.803 ' 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:47.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.803 --rc genhtml_branch_coverage=1 00:17:47.803 --rc genhtml_function_coverage=1 00:17:47.803 --rc genhtml_legend=1 00:17:47.803 --rc geninfo_all_blocks=1 00:17:47.803 --rc geninfo_unexecuted_blocks=1 00:17:47.803 00:17:47.803 ' 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:47.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.803 --rc genhtml_branch_coverage=1 00:17:47.803 --rc genhtml_function_coverage=1 00:17:47.803 --rc genhtml_legend=1 00:17:47.803 --rc geninfo_all_blocks=1 00:17:47.803 --rc geninfo_unexecuted_blocks=1 00:17:47.803 00:17:47.803 ' 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
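The scripts/common.sh trace a few entries back is the harness's generic dotted-version comparison (here concluding that lcov 1.15 is older than 2, so the pre-2.0 LCOV option set is exported). A stripped-down sketch of the same idea, using hypothetical names rather than the harness's exact helper and limited to purely numeric components:

  version_lt() {                 # succeed if $1 sorts before $2, comparing dot-separated fields numerically
      local IFS=.
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0}; y=${b[i]:-0}
          (( 10#$x < 10#$y )) && return 0
          (( 10#$x > 10#$y )) && return 1
      done
      return 1                   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo 'using pre-2.0 lcov options'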
00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:47.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:47.803 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:47.804 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.804 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:47.804 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:47.804 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.804 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:47.804 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:47.804 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:47.804 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.804 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.804 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.804 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:47.804 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:47.804 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:17:47.804 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:49.707 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:49.707 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:49.707 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:49.707 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:49.707 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:49.707 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:49.707 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:49.707 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:49.707 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:49.707 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:49.707 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:49.707 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:49.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:17:49.966 00:17:49.966 --- 10.0.0.2 ping statistics --- 00:17:49.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.966 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:49.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:49.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:17:49.966 00:17:49.966 --- 10.0.0.1 ping statistics --- 00:17:49.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.966 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2409261 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2409261 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2409261 ']' 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.966 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.966 [2024-12-10 04:05:44.182698] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:17:49.966 [2024-12-10 04:05:44.182784] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.966 [2024-12-10 04:05:44.254794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.966 [2024-12-10 04:05:44.307699] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.966 [2024-12-10 04:05:44.307763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.966 [2024-12-10 04:05:44.307793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.966 [2024-12-10 04:05:44.307804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.966 [2024-12-10 04:05:44.307814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.966 [2024-12-10 04:05:44.308438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.224 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.224 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:50.224 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:50.224 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:50.224 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.224 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.224 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:50.224 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:50.481 true 00:17:50.481 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:50.481 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:50.739 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:50.739 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:50.739 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:50.995 04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:50.995 04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:51.253 04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:51.253 04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:51.253 04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:51.510 04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:51.510 04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:51.769 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:51.769 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:51.769 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:51.769 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:52.027 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:52.027 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:52.027 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:52.285 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.285 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:52.543 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:52.543 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:52.543 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:52.801 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.801 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:53.059 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:53.059 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:53.059 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:53.059 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:53.059 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:53.059 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:53.059 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:53.059 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:53.059 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:53.317 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:53.317 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:53.317 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:53.317 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:17:53.317 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:53.317 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:53.317 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:53.317 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:53.317 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:53.317 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:53.317 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.TPrzIGVQm8 00:17:53.317 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:53.317 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Jg5uVC231S 00:17:53.317 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:53.317 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:53.317 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.TPrzIGVQm8 00:17:53.317 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Jg5uVC231S 00:17:53.317 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:53.575 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:53.833 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.TPrzIGVQm8 00:17:53.834 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TPrzIGVQm8 00:17:53.834 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:54.091 [2024-12-10 04:05:48.382836] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.091 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:54.349 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:54.607 [2024-12-10 04:05:48.920326] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:54.607 [2024-12-10 04:05:48.920656] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.607 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:54.865 malloc0 00:17:54.865 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:55.123 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TPrzIGVQm8 00:17:55.381 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:55.947 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.TPrzIGVQm8 00:18:05.963 Initializing NVMe Controllers 00:18:05.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:05.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:05.963 Initialization complete. Launching workers. 00:18:05.963 ======================================================== 00:18:05.963 Latency(us) 00:18:05.963 Device Information : IOPS MiB/s Average min max 00:18:05.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8735.08 34.12 7328.84 1346.71 9640.36 00:18:05.963 ======================================================== 00:18:05.963 Total : 8735.08 34.12 7328.84 1346.71 9640.36 00:18:05.963 00:18:05.963 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TPrzIGVQm8 00:18:05.963 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:05.963 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:05.963 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:05.963 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TPrzIGVQm8 00:18:05.963 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:05.963 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2411184 00:18:05.963 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:05.963 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:05.963 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2411184 /var/tmp/bdevperf.sock 00:18:05.963 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2411184 ']' 00:18:05.963 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.963 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.963 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:05.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.963 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.963 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.963 [2024-12-10 04:06:00.243556] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:05.963 [2024-12-10 04:06:00.243669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411184 ] 00:18:05.963 [2024-12-10 04:06:00.316634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.221 [2024-12-10 04:06:00.379889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.221 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.221 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:06.221 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TPrzIGVQm8 00:18:06.478 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:06.736 [2024-12-10 04:06:01.033940] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.736 TLSTESTn1 00:18:06.993 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:06.993 Running I/O for 10 seconds... 
00:18:08.857 3504.00 IOPS, 13.69 MiB/s [2024-12-10T03:06:04.619Z] 3603.00 IOPS, 14.07 MiB/s [2024-12-10T03:06:05.554Z] 3635.67 IOPS, 14.20 MiB/s [2024-12-10T03:06:06.491Z] 3653.50 IOPS, 14.27 MiB/s [2024-12-10T03:06:07.428Z] 3644.00 IOPS, 14.23 MiB/s [2024-12-10T03:06:08.364Z] 3656.17 IOPS, 14.28 MiB/s [2024-12-10T03:06:09.300Z] 3659.14 IOPS, 14.29 MiB/s [2024-12-10T03:06:10.676Z] 3664.88 IOPS, 14.32 MiB/s [2024-12-10T03:06:11.612Z] 3648.22 IOPS, 14.25 MiB/s [2024-12-10T03:06:11.612Z] 3654.50 IOPS, 14.28 MiB/s 00:18:17.223 Latency(us) 00:18:17.223 [2024-12-10T03:06:11.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.223 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:17.223 Verification LBA range: start 0x0 length 0x2000 00:18:17.223 TLSTESTn1 : 10.02 3660.55 14.30 0.00 0.00 34911.09 6262.33 36894.34 00:18:17.223 [2024-12-10T03:06:11.612Z] =================================================================================================================== 00:18:17.223 [2024-12-10T03:06:11.612Z] Total : 3660.55 14.30 0.00 0.00 34911.09 6262.33 36894.34 00:18:17.223 { 00:18:17.223 "results": [ 00:18:17.223 { 00:18:17.223 "job": "TLSTESTn1", 00:18:17.223 "core_mask": "0x4", 00:18:17.223 "workload": "verify", 00:18:17.223 "status": "finished", 00:18:17.223 "verify_range": { 00:18:17.223 "start": 0, 00:18:17.223 "length": 8192 00:18:17.223 }, 00:18:17.223 "queue_depth": 128, 00:18:17.223 "io_size": 4096, 00:18:17.223 "runtime": 10.017882, 00:18:17.223 "iops": 3660.5541969849514, 00:18:17.223 "mibps": 14.299039831972467, 00:18:17.223 "io_failed": 0, 00:18:17.223 "io_timeout": 0, 00:18:17.223 "avg_latency_us": 34911.09366153697, 00:18:17.223 "min_latency_us": 6262.328888888889, 00:18:17.223 "max_latency_us": 36894.34074074074 00:18:17.223 } 00:18:17.223 ], 00:18:17.223 "core_count": 1 00:18:17.223 } 00:18:17.223 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:17.223 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2411184 00:18:17.223 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2411184 ']' 00:18:17.223 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2411184 00:18:17.223 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:17.223 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.223 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2411184 00:18:17.223 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:17.223 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:17.223 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2411184' 00:18:17.223 killing process with pid 2411184 00:18:17.223 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2411184 00:18:17.223 Received shutdown signal, test time was about 10.000000 seconds 00:18:17.223 00:18:17.223 Latency(us) 00:18:17.223 [2024-12-10T03:06:11.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.223 [2024-12-10T03:06:11.612Z] 
=================================================================================================================== 00:18:17.223 [2024-12-10T03:06:11.613Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2411184 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Jg5uVC231S 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Jg5uVC231S 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Jg5uVC231S 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Jg5uVC231S 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2413104 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2413104 /var/tmp/bdevperf.sock 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2413104 ']' 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
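Each run_bdevperf invocation in this log drives the initiator side the same way: bdevperf is started idle (-z) on its own RPC socket, the PSK is registered in that process's keyring, a TLS controller is attached, and the verify job is kicked off. A condensed sketch using the socket and key paths from this run (the harness's waitforlisten polling is omitted and the daemon is simply backgrounded here):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock
  # start bdevperf idle and let it open its RPC socket
  $SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w verify -t 10 &
  # register the PSK and attach a TLS-enabled controller through that socket
  $SPDK/scripts/rpc.py -s $SOCK keyring_file_add_key key0 /tmp/tmp.TPrzIGVQm8
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # run the configured verify workload and collect the JSON results shown above
  $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s $SOCK perform_tests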
00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.224 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.224 [2024-12-10 04:06:11.569623] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:17.224 [2024-12-10 04:06:11.569703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413104 ] 00:18:17.483 [2024-12-10 04:06:11.637107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.483 [2024-12-10 04:06:11.697972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.483 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.483 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:17.483 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Jg5uVC231S 00:18:17.741 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:17.999 [2024-12-10 04:06:12.368650] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.999 [2024-12-10 04:06:12.374399] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:17.999 [2024-12-10 04:06:12.374908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229ff30 (107): Transport endpoint is not connected 00:18:18.000 [2024-12-10 04:06:12.375890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229ff30 (9): Bad file descriptor 00:18:18.000 [2024-12-10 04:06:12.376889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:18.000 [2024-12-10 04:06:12.376927] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:18.000 [2024-12-10 04:06:12.376948] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:18.000 [2024-12-10 04:06:12.376979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
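In this NOT case the initiator registers the second key (/tmp/tmp.Jg5uVC231S) while the target's host entry for nqn.2016-06.io.spdk:host1 still references the first key, so the TLS handshake cannot complete, the socket is torn down (the errno 107 / 'Transport endpoint is not connected' messages above), and the attach that produced the JSON-RPC error below fails with -5. Reduced to the two relevant calls (rpc.py abbreviates the full scripts/rpc.py path used above):

  # PSK that was never associated with host1 on the target
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Jg5uVC231S
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0   # expected to fail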
00:18:18.000 request: 00:18:18.000 { 00:18:18.000 "name": "TLSTEST", 00:18:18.000 "trtype": "tcp", 00:18:18.000 "traddr": "10.0.0.2", 00:18:18.000 "adrfam": "ipv4", 00:18:18.000 "trsvcid": "4420", 00:18:18.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.000 "prchk_reftag": false, 00:18:18.000 "prchk_guard": false, 00:18:18.000 "hdgst": false, 00:18:18.000 "ddgst": false, 00:18:18.000 "psk": "key0", 00:18:18.000 "allow_unrecognized_csi": false, 00:18:18.000 "method": "bdev_nvme_attach_controller", 00:18:18.000 "req_id": 1 00:18:18.000 } 00:18:18.000 Got JSON-RPC error response 00:18:18.000 response: 00:18:18.000 { 00:18:18.000 "code": -5, 00:18:18.000 "message": "Input/output error" 00:18:18.000 } 00:18:18.260 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2413104 00:18:18.260 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2413104 ']' 00:18:18.260 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2413104 00:18:18.260 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:18.260 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.260 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2413104 00:18:18.260 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:18.260 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:18.260 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2413104' 00:18:18.260 killing process with pid 2413104 00:18:18.260 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2413104 00:18:18.260 Received shutdown signal, test time was about 10.000000 seconds 00:18:18.260 00:18:18.260 Latency(us) 00:18:18.260 [2024-12-10T03:06:12.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.260 [2024-12-10T03:06:12.649Z] =================================================================================================================== 00:18:18.260 [2024-12-10T03:06:12.649Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:18.260 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2413104 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TPrzIGVQm8 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.TPrzIGVQm8 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TPrzIGVQm8 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TPrzIGVQm8 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2413246 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2413246 /var/tmp/bdevperf.sock 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2413246 ']' 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.519 [2024-12-10 04:06:12.705245] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
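Each of these negative cases is wrapped in the NOT helper from autotest_common.sh; as the xtrace shows, it runs the wrapped command, captures the exit status in es, and only succeeds when that status is non-zero. A simplified sketch of the idea (not the helper's exact source, which also validates the argument via valid_exec_arg):

  NOT() {
    local es=0
    "$@" || es=$?   # run the wrapped command and remember its exit status
    (( es != 0 ))   # succeed only if the command failed
  }
  # usage, as in the case running here:
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TPrzIGVQm8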
00:18:18.519 [2024-12-10 04:06:12.705325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413246 ] 00:18:18.519 [2024-12-10 04:06:12.771462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.519 [2024-12-10 04:06:12.829742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.777 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.777 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:18.777 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TPrzIGVQm8 00:18:19.035 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:19.294 [2024-12-10 04:06:13.459593] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.294 [2024-12-10 04:06:13.466180] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:19.294 [2024-12-10 04:06:13.466210] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:19.294 [2024-12-10 04:06:13.466283] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:19.294 [2024-12-10 04:06:13.466740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bbf30 (107): Transport endpoint is not connected 00:18:19.294 [2024-12-10 04:06:13.467724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bbf30 (9): Bad file descriptor 00:18:19.294 [2024-12-10 04:06:13.468723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:19.294 [2024-12-10 04:06:13.468747] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:19.294 [2024-12-10 04:06:13.468769] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:19.294 [2024-12-10 04:06:13.468799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
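Here the key material is correct but it is presented on behalf of nqn.2016-06.io.spdk:host2. The target builds the expected PSK identity from the host and subsystem NQNs (the 'NVMe0R01 <hostnqn> <subnqn>' string in the errors above), and only host1 was added to the subsystem with a PSK, so the lookup fails and the connection is rejected. Compared with the successful attach, only the host NQN changes:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0   # expected to fail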
00:18:19.294 request: 00:18:19.294 { 00:18:19.294 "name": "TLSTEST", 00:18:19.294 "trtype": "tcp", 00:18:19.294 "traddr": "10.0.0.2", 00:18:19.294 "adrfam": "ipv4", 00:18:19.294 "trsvcid": "4420", 00:18:19.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.294 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:19.294 "prchk_reftag": false, 00:18:19.294 "prchk_guard": false, 00:18:19.294 "hdgst": false, 00:18:19.294 "ddgst": false, 00:18:19.294 "psk": "key0", 00:18:19.294 "allow_unrecognized_csi": false, 00:18:19.294 "method": "bdev_nvme_attach_controller", 00:18:19.294 "req_id": 1 00:18:19.294 } 00:18:19.294 Got JSON-RPC error response 00:18:19.294 response: 00:18:19.294 { 00:18:19.294 "code": -5, 00:18:19.294 "message": "Input/output error" 00:18:19.294 } 00:18:19.294 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2413246 00:18:19.294 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2413246 ']' 00:18:19.294 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2413246 00:18:19.294 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:19.294 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.294 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2413246 00:18:19.294 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:19.294 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:19.295 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2413246' 00:18:19.295 killing process with pid 2413246 00:18:19.295 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2413246 00:18:19.295 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.295 00:18:19.295 Latency(us) 00:18:19.295 [2024-12-10T03:06:13.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.295 [2024-12-10T03:06:13.684Z] =================================================================================================================== 00:18:19.295 [2024-12-10T03:06:13.684Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:19.295 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2413246 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TPrzIGVQm8 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.TPrzIGVQm8 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TPrzIGVQm8 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TPrzIGVQm8 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2413383 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2413383 /var/tmp/bdevperf.sock 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2413383 ']' 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.553 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.553 [2024-12-10 04:06:13.799823] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:18:19.553 [2024-12-10 04:06:13.799914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413383 ] 00:18:19.553 [2024-12-10 04:06:13.865770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.553 [2024-12-10 04:06:13.919600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.811 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.811 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:19.811 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TPrzIGVQm8 00:18:20.070 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:20.330 [2024-12-10 04:06:14.545759] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.330 [2024-12-10 04:06:14.551332] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:20.330 [2024-12-10 04:06:14.551368] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:20.330 [2024-12-10 04:06:14.551421] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:20.330 [2024-12-10 04:06:14.552017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x859f30 (107): Transport endpoint is not connected 00:18:20.330 [2024-12-10 04:06:14.553003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x859f30 (9): Bad file descriptor 00:18:20.330 [2024-12-10 04:06:14.554002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:20.330 [2024-12-10 04:06:14.554027] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:20.330 [2024-12-10 04:06:14.554049] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:20.330 [2024-12-10 04:06:14.554080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:18:20.330 request: 00:18:20.330 { 00:18:20.330 "name": "TLSTEST", 00:18:20.330 "trtype": "tcp", 00:18:20.330 "traddr": "10.0.0.2", 00:18:20.330 "adrfam": "ipv4", 00:18:20.330 "trsvcid": "4420", 00:18:20.330 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:20.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.330 "prchk_reftag": false, 00:18:20.330 "prchk_guard": false, 00:18:20.330 "hdgst": false, 00:18:20.330 "ddgst": false, 00:18:20.330 "psk": "key0", 00:18:20.330 "allow_unrecognized_csi": false, 00:18:20.330 "method": "bdev_nvme_attach_controller", 00:18:20.330 "req_id": 1 00:18:20.330 } 00:18:20.330 Got JSON-RPC error response 00:18:20.330 response: 00:18:20.330 { 00:18:20.330 "code": -5, 00:18:20.330 "message": "Input/output error" 00:18:20.330 } 00:18:20.330 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2413383 00:18:20.330 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2413383 ']' 00:18:20.330 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2413383 00:18:20.330 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:20.330 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.330 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2413383 00:18:20.330 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:20.330 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:20.330 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2413383' 00:18:20.330 killing process with pid 2413383 00:18:20.330 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2413383 00:18:20.330 Received shutdown signal, test time was about 10.000000 seconds 00:18:20.330 00:18:20.330 Latency(us) 00:18:20.330 [2024-12-10T03:06:14.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.330 [2024-12-10T03:06:14.719Z] =================================================================================================================== 00:18:20.330 [2024-12-10T03:06:14.719Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:20.330 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2413383 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:20.589 
04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2413526 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2413526 /var/tmp/bdevperf.sock 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2413526 ']' 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.589 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.589 [2024-12-10 04:06:14.883075] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
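This NOT case passes an empty string instead of a key path. keyring_file_add_key only accepts absolute paths, so key0 is never created and the subsequent attach fails with 'Required key not available'; both JSON-RPC errors appear below. In short (rpc.py again abbreviating the full script path):

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''    # rejected: non-absolute path
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0   # fails: key0 was never added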
00:18:20.589 [2024-12-10 04:06:14.883153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413526 ] 00:18:20.589 [2024-12-10 04:06:14.949968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.847 [2024-12-10 04:06:15.007834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.847 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.847 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:20.847 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:21.105 [2024-12-10 04:06:15.370807] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:21.105 [2024-12-10 04:06:15.370857] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:21.105 request: 00:18:21.105 { 00:18:21.105 "name": "key0", 00:18:21.105 "path": "", 00:18:21.105 "method": "keyring_file_add_key", 00:18:21.105 "req_id": 1 00:18:21.105 } 00:18:21.105 Got JSON-RPC error response 00:18:21.105 response: 00:18:21.105 { 00:18:21.105 "code": -1, 00:18:21.105 "message": "Operation not permitted" 00:18:21.105 } 00:18:21.105 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:21.362 [2024-12-10 04:06:15.635657] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:21.362 [2024-12-10 04:06:15.635709] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:21.362 request: 00:18:21.362 { 00:18:21.362 "name": "TLSTEST", 00:18:21.362 "trtype": "tcp", 00:18:21.362 "traddr": "10.0.0.2", 00:18:21.362 "adrfam": "ipv4", 00:18:21.362 "trsvcid": "4420", 00:18:21.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.362 "prchk_reftag": false, 00:18:21.362 "prchk_guard": false, 00:18:21.362 "hdgst": false, 00:18:21.362 "ddgst": false, 00:18:21.363 "psk": "key0", 00:18:21.363 "allow_unrecognized_csi": false, 00:18:21.363 "method": "bdev_nvme_attach_controller", 00:18:21.363 "req_id": 1 00:18:21.363 } 00:18:21.363 Got JSON-RPC error response 00:18:21.363 response: 00:18:21.363 { 00:18:21.363 "code": -126, 00:18:21.363 "message": "Required key not available" 00:18:21.363 } 00:18:21.363 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2413526 00:18:21.363 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2413526 ']' 00:18:21.363 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2413526 00:18:21.363 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:21.363 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.363 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2413526 00:18:21.363 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:21.363 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:21.363 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2413526' 00:18:21.363 killing process with pid 2413526 00:18:21.363 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2413526 00:18:21.363 Received shutdown signal, test time was about 10.000000 seconds 00:18:21.363 00:18:21.363 Latency(us) 00:18:21.363 [2024-12-10T03:06:15.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.363 [2024-12-10T03:06:15.752Z] =================================================================================================================== 00:18:21.363 [2024-12-10T03:06:15.752Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:21.363 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2413526 00:18:21.622 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:21.622 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:21.622 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:21.622 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:21.622 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:21.622 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2409261 00:18:21.622 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2409261 ']' 00:18:21.622 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2409261 00:18:21.622 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:21.622 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.622 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2409261 00:18:21.622 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:21.622 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:21.622 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2409261' 00:18:21.622 killing process with pid 2409261 00:18:21.622 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2409261 00:18:21.622 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2409261 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:21.880 04:06:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.pZNQTltgiX 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.pZNQTltgiX 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2413686 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2413686 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2413686 ']' 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.880 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.881 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.881 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.881 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.138 [2024-12-10 04:06:16.278056] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:22.138 [2024-12-10 04:06:16.278153] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.138 [2024-12-10 04:06:16.351956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.138 [2024-12-10 04:06:16.408216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.138 [2024-12-10 04:06:16.408272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
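The longer key generated above is wrapped in the NVMe TLS PSK interchange format (the NVMeTLSkey-1:02:...: string; the 02 digest field matches the digest=2 argument, i.e. the SHA-384 variant, versus the 01/SHA-256 keys used earlier), written to a temp file and locked down to mode 0600 before it is handed to the keyring. The storage steps, as performed in the xtrace:

  key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
  key_long_path=$(mktemp)                 # /tmp/tmp.pZNQTltgiX in this run
  echo -n "$key_long" > "$key_long_path"
  chmod 0600 "$key_long_path"             # the keyring rejects group/other-accessible key files
  rpc.py keyring_file_add_key key0 "$key_long_path"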
00:18:22.138 [2024-12-10 04:06:16.408300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.138 [2024-12-10 04:06:16.408311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.138 [2024-12-10 04:06:16.408321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:22.138 [2024-12-10 04:06:16.408893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.138 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.138 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:22.138 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:22.138 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:22.138 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.397 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.397 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.pZNQTltgiX 00:18:22.397 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pZNQTltgiX 00:18:22.397 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:22.657 [2024-12-10 04:06:16.793783] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.657 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:22.915 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:23.175 [2024-12-10 04:06:17.383303] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:23.175 [2024-12-10 04:06:17.383571] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.175 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:23.435 malloc0 00:18:23.435 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:23.693 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pZNQTltgiX 00:18:23.950 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:24.208 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pZNQTltgiX 00:18:24.208 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:18:24.208 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:24.208 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:24.208 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pZNQTltgiX 00:18:24.208 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:24.208 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2413978 00:18:24.208 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:24.208 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:24.208 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2413978 /var/tmp/bdevperf.sock 00:18:24.208 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2413978 ']' 00:18:24.208 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.208 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.208 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.208 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.208 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.208 [2024-12-10 04:06:18.539597] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:18:24.208 [2024-12-10 04:06:18.539685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413978 ] 00:18:24.465 [2024-12-10 04:06:18.607033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.466 [2024-12-10 04:06:18.664204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.466 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.466 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:24.466 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pZNQTltgiX 00:18:24.724 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:24.984 [2024-12-10 04:06:19.347734] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.243 TLSTESTn1 00:18:25.243 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:25.243 Running I/O for 10 seconds... 00:18:27.189 3580.00 IOPS, 13.98 MiB/s [2024-12-10T03:06:22.958Z] 3625.00 IOPS, 14.16 MiB/s [2024-12-10T03:06:23.898Z] 3625.33 IOPS, 14.16 MiB/s [2024-12-10T03:06:24.838Z] 3604.00 IOPS, 14.08 MiB/s [2024-12-10T03:06:25.775Z] 3612.60 IOPS, 14.11 MiB/s [2024-12-10T03:06:26.710Z] 3606.83 IOPS, 14.09 MiB/s [2024-12-10T03:06:27.642Z] 3614.43 IOPS, 14.12 MiB/s [2024-12-10T03:06:29.019Z] 3616.88 IOPS, 14.13 MiB/s [2024-12-10T03:06:29.956Z] 3618.56 IOPS, 14.13 MiB/s [2024-12-10T03:06:29.956Z] 3616.20 IOPS, 14.13 MiB/s 00:18:35.567 Latency(us) 00:18:35.567 [2024-12-10T03:06:29.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.567 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:35.567 Verification LBA range: start 0x0 length 0x2000 00:18:35.567 TLSTESTn1 : 10.02 3621.41 14.15 0.00 0.00 35284.11 6359.42 28544.57 00:18:35.567 [2024-12-10T03:06:29.956Z] =================================================================================================================== 00:18:35.567 [2024-12-10T03:06:29.956Z] Total : 3621.41 14.15 0.00 0.00 35284.11 6359.42 28544.57 00:18:35.567 { 00:18:35.567 "results": [ 00:18:35.567 { 00:18:35.567 "job": "TLSTESTn1", 00:18:35.567 "core_mask": "0x4", 00:18:35.567 "workload": "verify", 00:18:35.567 "status": "finished", 00:18:35.567 "verify_range": { 00:18:35.567 "start": 0, 00:18:35.567 "length": 8192 00:18:35.567 }, 00:18:35.567 "queue_depth": 128, 00:18:35.567 "io_size": 4096, 00:18:35.567 "runtime": 10.020671, 00:18:35.567 "iops": 3621.414174759355, 00:18:35.567 "mibps": 14.14614912015373, 00:18:35.567 "io_failed": 0, 00:18:35.567 "io_timeout": 0, 00:18:35.567 "avg_latency_us": 35284.10648622223, 00:18:35.567 "min_latency_us": 6359.419259259259, 00:18:35.567 "max_latency_us": 28544.568888888887 00:18:35.567 } 00:18:35.567 ], 00:18:35.567 
"core_count": 1 00:18:35.567 } 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2413978 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2413978 ']' 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2413978 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2413978 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2413978' 00:18:35.567 killing process with pid 2413978 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2413978 00:18:35.567 Received shutdown signal, test time was about 10.000000 seconds 00:18:35.567 00:18:35.567 Latency(us) 00:18:35.567 [2024-12-10T03:06:29.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.567 [2024-12-10T03:06:29.956Z] =================================================================================================================== 00:18:35.567 [2024-12-10T03:06:29.956Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2413978 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.pZNQTltgiX 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pZNQTltgiX 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pZNQTltgiX 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pZNQTltgiX 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pZNQTltgiX 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2415309 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2415309 /var/tmp/bdevperf.sock 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2415309 ']' 00:18:35.567 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.568 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.568 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.568 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.568 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.568 [2024-12-10 04:06:29.945455] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:18:35.568 [2024-12-10 04:06:29.945585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2415309 ] 00:18:35.826 [2024-12-10 04:06:30.022861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.826 [2024-12-10 04:06:30.084822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.826 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.826 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:35.826 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pZNQTltgiX 00:18:36.450 [2024-12-10 04:06:30.507107] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pZNQTltgiX': 0100666 00:18:36.450 [2024-12-10 04:06:30.507155] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:36.450 request: 00:18:36.450 { 00:18:36.450 "name": "key0", 00:18:36.450 "path": "/tmp/tmp.pZNQTltgiX", 00:18:36.450 "method": "keyring_file_add_key", 00:18:36.450 "req_id": 1 00:18:36.450 } 00:18:36.450 Got JSON-RPC error response 00:18:36.450 response: 00:18:36.450 { 00:18:36.450 "code": -1, 00:18:36.450 "message": "Operation not permitted" 00:18:36.450 } 00:18:36.450 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:36.450 [2024-12-10 04:06:30.800014] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.450 [2024-12-10 04:06:30.800064] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:36.450 request: 00:18:36.450 { 00:18:36.450 "name": "TLSTEST", 00:18:36.450 "trtype": "tcp", 00:18:36.450 "traddr": "10.0.0.2", 00:18:36.450 "adrfam": "ipv4", 00:18:36.450 "trsvcid": "4420", 00:18:36.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.450 "prchk_reftag": false, 00:18:36.450 "prchk_guard": false, 00:18:36.450 "hdgst": false, 00:18:36.450 "ddgst": false, 00:18:36.450 "psk": "key0", 00:18:36.450 "allow_unrecognized_csi": false, 00:18:36.450 "method": "bdev_nvme_attach_controller", 00:18:36.450 "req_id": 1 00:18:36.450 } 00:18:36.450 Got JSON-RPC error response 00:18:36.450 response: 00:18:36.450 { 00:18:36.450 "code": -126, 00:18:36.450 "message": "Required key not available" 00:18:36.450 } 00:18:36.450 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2415309 00:18:36.450 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2415309 ']' 00:18:36.450 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2415309 00:18:36.450 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:36.450 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.739 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2415309 00:18:36.739 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:36.739 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:36.739 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2415309' 00:18:36.739 killing process with pid 2415309 00:18:36.739 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2415309 00:18:36.739 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.739 00:18:36.739 Latency(us) 00:18:36.739 [2024-12-10T03:06:31.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.739 [2024-12-10T03:06:31.128Z] =================================================================================================================== 00:18:36.739 [2024-12-10T03:06:31.128Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:36.739 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2415309 00:18:36.739 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:36.739 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:36.739 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:36.739 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:36.739 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:36.739 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2413686 00:18:36.739 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2413686 ']' 00:18:36.739 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2413686 00:18:36.739 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:36.739 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.739 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2413686 00:18:36.739 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:36.739 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:36.739 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2413686' 00:18:36.739 killing process with pid 2413686 00:18:36.739 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2413686 00:18:36.739 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2413686 00:18:36.998 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:36.998 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:36.998 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:36.998 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.998 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2415552 00:18:36.998 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:36.998 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2415552 00:18:36.998 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2415552 ']' 00:18:36.998 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.998 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.998 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.998 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.998 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.259 [2024-12-10 04:06:31.407349] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:37.259 [2024-12-10 04:06:31.407453] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.259 [2024-12-10 04:06:31.482978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.259 [2024-12-10 04:06:31.543688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.259 [2024-12-10 04:06:31.543750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.259 [2024-12-10 04:06:31.543779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.259 [2024-12-10 04:06:31.543791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.259 [2024-12-10 04:06:31.543802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
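The failing case above is driven entirely by the permissions on the PSK file: keyring_file_add_key rejects a group/world-accessible key, and the subsequent controller attach then fails because no key was ever registered. A minimal reproduction sketch of that sequence, using the same RPCs as the trace:

chmod 0666 /tmp/tmp.pZNQTltgiX
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pZNQTltgiX
#   -> "Invalid permissions for key file ... 0100666", JSON-RPC error -1 (Operation not permitted)
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
#   -> "Could not load PSK: key0", JSON-RPC error -126 (Required key not available)
chmod 0600 /tmp/tmp.pZNQTltgiX   # restore owner-only permissions before the key is reused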
00:18:37.259 [2024-12-10 04:06:31.544435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.517 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.517 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:37.517 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:37.517 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:37.517 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.517 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.517 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.pZNQTltgiX 00:18:37.517 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:37.517 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.pZNQTltgiX 00:18:37.517 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:37.517 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.517 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:37.517 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.517 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.pZNQTltgiX 00:18:37.517 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pZNQTltgiX 00:18:37.517 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:37.775 [2024-12-10 04:06:31.959223] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.775 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:38.033 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:38.291 [2024-12-10 04:06:32.556803] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:38.291 [2024-12-10 04:06:32.557074] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.291 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:38.548 malloc0 00:18:38.548 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:38.806 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pZNQTltgiX 00:18:39.064 [2024-12-10 
04:06:33.418241] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pZNQTltgiX': 0100666 00:18:39.064 [2024-12-10 04:06:33.418279] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:39.064 request: 00:18:39.064 { 00:18:39.064 "name": "key0", 00:18:39.064 "path": "/tmp/tmp.pZNQTltgiX", 00:18:39.064 "method": "keyring_file_add_key", 00:18:39.064 "req_id": 1 00:18:39.064 } 00:18:39.064 Got JSON-RPC error response 00:18:39.064 response: 00:18:39.064 { 00:18:39.064 "code": -1, 00:18:39.064 "message": "Operation not permitted" 00:18:39.064 } 00:18:39.064 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:39.322 [2024-12-10 04:06:33.682993] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:39.322 [2024-12-10 04:06:33.683050] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:39.322 request: 00:18:39.322 { 00:18:39.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.322 "host": "nqn.2016-06.io.spdk:host1", 00:18:39.322 "psk": "key0", 00:18:39.322 "method": "nvmf_subsystem_add_host", 00:18:39.322 "req_id": 1 00:18:39.322 } 00:18:39.322 Got JSON-RPC error response 00:18:39.322 response: 00:18:39.322 { 00:18:39.322 "code": -32603, 00:18:39.322 "message": "Internal error" 00:18:39.322 } 00:18:39.322 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:39.322 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.322 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.322 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.322 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2415552 00:18:39.322 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2415552 ']' 00:18:39.322 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2415552 00:18:39.322 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:39.581 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.581 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2415552 00:18:39.581 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:39.581 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:39.581 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2415552' 00:18:39.581 killing process with pid 2415552 00:18:39.581 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2415552 00:18:39.581 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2415552 00:18:39.841 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.pZNQTltgiX 00:18:39.841 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:39.841 04:06:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:39.841 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:39.841 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.841 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2415856 00:18:39.841 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:39.841 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2415856 00:18:39.841 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2415856 ']' 00:18:39.841 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.841 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.841 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.841 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.841 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.841 [2024-12-10 04:06:34.029528] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:39.841 [2024-12-10 04:06:34.029653] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.841 [2024-12-10 04:06:34.102381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.841 [2024-12-10 04:06:34.159290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.841 [2024-12-10 04:06:34.159352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.841 [2024-12-10 04:06:34.159380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.841 [2024-12-10 04:06:34.159391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.841 [2024-12-10 04:06:34.159400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
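setup_nvmf_tgt, attempted above with the still world-readable key (where keyring_file_add_key fails and nvmf_subsystem_add_host then returns -32603 "Internal error") and repeated below after the chmod 0600, is the standard target-side TLS sequence. A condensed sketch against the target's default /var/tmp/spdk.sock socket, repo-relative paths assumed:

scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: secure-channel (TLS) listener
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pZNQTltgiX        # succeeds only for an owner-only (0600) key file
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0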
00:18:39.841 [2024-12-10 04:06:34.160021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.100 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:40.100 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:40.100 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:40.100 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:40.100 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.100 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.100 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.pZNQTltgiX 00:18:40.100 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pZNQTltgiX 00:18:40.100 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:40.357 [2024-12-10 04:06:34.561817] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.357 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:40.615 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:40.872 [2024-12-10 04:06:35.175501] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:40.872 [2024-12-10 04:06:35.175804] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.872 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:41.129 malloc0 00:18:41.129 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:41.694 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pZNQTltgiX 00:18:41.951 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:42.209 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2416150 00:18:42.209 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:42.209 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:42.209 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2416150 /var/tmp/bdevperf.sock 00:18:42.209 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2416150 ']' 00:18:42.209 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.209 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.209 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:42.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:42.209 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.209 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.209 [2024-12-10 04:06:36.474897] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:42.209 [2024-12-10 04:06:36.474971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2416150 ] 00:18:42.209 [2024-12-10 04:06:36.541624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.467 [2024-12-10 04:06:36.600654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.467 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.467 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:42.467 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pZNQTltgiX 00:18:42.724 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:42.981 [2024-12-10 04:06:37.231093] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:42.981 TLSTESTn1 00:18:42.981 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:43.550 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:43.550 "subsystems": [ 00:18:43.550 { 00:18:43.550 "subsystem": "keyring", 00:18:43.550 "config": [ 00:18:43.550 { 00:18:43.550 "method": "keyring_file_add_key", 00:18:43.550 "params": { 00:18:43.550 "name": "key0", 00:18:43.550 "path": "/tmp/tmp.pZNQTltgiX" 00:18:43.550 } 00:18:43.550 } 00:18:43.550 ] 00:18:43.550 }, 00:18:43.550 { 00:18:43.550 "subsystem": "iobuf", 00:18:43.550 "config": [ 00:18:43.550 { 00:18:43.550 "method": "iobuf_set_options", 00:18:43.550 "params": { 00:18:43.550 "small_pool_count": 8192, 00:18:43.550 "large_pool_count": 1024, 00:18:43.550 "small_bufsize": 8192, 00:18:43.550 "large_bufsize": 135168, 00:18:43.550 "enable_numa": false 00:18:43.550 } 00:18:43.550 } 00:18:43.550 ] 00:18:43.550 }, 00:18:43.550 { 00:18:43.550 "subsystem": "sock", 00:18:43.550 "config": [ 00:18:43.550 { 00:18:43.550 "method": "sock_set_default_impl", 00:18:43.550 "params": { 00:18:43.550 "impl_name": "posix" 
00:18:43.550 } 00:18:43.550 }, 00:18:43.550 { 00:18:43.550 "method": "sock_impl_set_options", 00:18:43.550 "params": { 00:18:43.550 "impl_name": "ssl", 00:18:43.550 "recv_buf_size": 4096, 00:18:43.550 "send_buf_size": 4096, 00:18:43.550 "enable_recv_pipe": true, 00:18:43.550 "enable_quickack": false, 00:18:43.550 "enable_placement_id": 0, 00:18:43.550 "enable_zerocopy_send_server": true, 00:18:43.550 "enable_zerocopy_send_client": false, 00:18:43.550 "zerocopy_threshold": 0, 00:18:43.550 "tls_version": 0, 00:18:43.550 "enable_ktls": false 00:18:43.550 } 00:18:43.550 }, 00:18:43.550 { 00:18:43.550 "method": "sock_impl_set_options", 00:18:43.550 "params": { 00:18:43.550 "impl_name": "posix", 00:18:43.550 "recv_buf_size": 2097152, 00:18:43.550 "send_buf_size": 2097152, 00:18:43.550 "enable_recv_pipe": true, 00:18:43.550 "enable_quickack": false, 00:18:43.550 "enable_placement_id": 0, 00:18:43.550 "enable_zerocopy_send_server": true, 00:18:43.550 "enable_zerocopy_send_client": false, 00:18:43.550 "zerocopy_threshold": 0, 00:18:43.550 "tls_version": 0, 00:18:43.550 "enable_ktls": false 00:18:43.551 } 00:18:43.551 } 00:18:43.551 ] 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "subsystem": "vmd", 00:18:43.551 "config": [] 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "subsystem": "accel", 00:18:43.551 "config": [ 00:18:43.551 { 00:18:43.551 "method": "accel_set_options", 00:18:43.551 "params": { 00:18:43.551 "small_cache_size": 128, 00:18:43.551 "large_cache_size": 16, 00:18:43.551 "task_count": 2048, 00:18:43.551 "sequence_count": 2048, 00:18:43.551 "buf_count": 2048 00:18:43.551 } 00:18:43.551 } 00:18:43.551 ] 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "subsystem": "bdev", 00:18:43.551 "config": [ 00:18:43.551 { 00:18:43.551 "method": "bdev_set_options", 00:18:43.551 "params": { 00:18:43.551 "bdev_io_pool_size": 65535, 00:18:43.551 "bdev_io_cache_size": 256, 00:18:43.551 "bdev_auto_examine": true, 00:18:43.551 "iobuf_small_cache_size": 128, 00:18:43.551 "iobuf_large_cache_size": 16 00:18:43.551 } 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "method": "bdev_raid_set_options", 00:18:43.551 "params": { 00:18:43.551 "process_window_size_kb": 1024, 00:18:43.551 "process_max_bandwidth_mb_sec": 0 00:18:43.551 } 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "method": "bdev_iscsi_set_options", 00:18:43.551 "params": { 00:18:43.551 "timeout_sec": 30 00:18:43.551 } 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "method": "bdev_nvme_set_options", 00:18:43.551 "params": { 00:18:43.551 "action_on_timeout": "none", 00:18:43.551 "timeout_us": 0, 00:18:43.551 "timeout_admin_us": 0, 00:18:43.551 "keep_alive_timeout_ms": 10000, 00:18:43.551 "arbitration_burst": 0, 00:18:43.551 "low_priority_weight": 0, 00:18:43.551 "medium_priority_weight": 0, 00:18:43.551 "high_priority_weight": 0, 00:18:43.551 "nvme_adminq_poll_period_us": 10000, 00:18:43.551 "nvme_ioq_poll_period_us": 0, 00:18:43.551 "io_queue_requests": 0, 00:18:43.551 "delay_cmd_submit": true, 00:18:43.551 "transport_retry_count": 4, 00:18:43.551 "bdev_retry_count": 3, 00:18:43.551 "transport_ack_timeout": 0, 00:18:43.551 "ctrlr_loss_timeout_sec": 0, 00:18:43.551 "reconnect_delay_sec": 0, 00:18:43.551 "fast_io_fail_timeout_sec": 0, 00:18:43.551 "disable_auto_failback": false, 00:18:43.551 "generate_uuids": false, 00:18:43.551 "transport_tos": 0, 00:18:43.551 "nvme_error_stat": false, 00:18:43.551 "rdma_srq_size": 0, 00:18:43.551 "io_path_stat": false, 00:18:43.551 "allow_accel_sequence": false, 00:18:43.551 "rdma_max_cq_size": 0, 00:18:43.551 
"rdma_cm_event_timeout_ms": 0, 00:18:43.551 "dhchap_digests": [ 00:18:43.551 "sha256", 00:18:43.551 "sha384", 00:18:43.551 "sha512" 00:18:43.551 ], 00:18:43.551 "dhchap_dhgroups": [ 00:18:43.551 "null", 00:18:43.551 "ffdhe2048", 00:18:43.551 "ffdhe3072", 00:18:43.551 "ffdhe4096", 00:18:43.551 "ffdhe6144", 00:18:43.551 "ffdhe8192" 00:18:43.551 ] 00:18:43.551 } 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "method": "bdev_nvme_set_hotplug", 00:18:43.551 "params": { 00:18:43.551 "period_us": 100000, 00:18:43.551 "enable": false 00:18:43.551 } 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "method": "bdev_malloc_create", 00:18:43.551 "params": { 00:18:43.551 "name": "malloc0", 00:18:43.551 "num_blocks": 8192, 00:18:43.551 "block_size": 4096, 00:18:43.551 "physical_block_size": 4096, 00:18:43.551 "uuid": "56f05e4f-acde-4ee3-8f1e-aebbb117c8a6", 00:18:43.551 "optimal_io_boundary": 0, 00:18:43.551 "md_size": 0, 00:18:43.551 "dif_type": 0, 00:18:43.551 "dif_is_head_of_md": false, 00:18:43.551 "dif_pi_format": 0 00:18:43.551 } 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "method": "bdev_wait_for_examine" 00:18:43.551 } 00:18:43.551 ] 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "subsystem": "nbd", 00:18:43.551 "config": [] 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "subsystem": "scheduler", 00:18:43.551 "config": [ 00:18:43.551 { 00:18:43.551 "method": "framework_set_scheduler", 00:18:43.551 "params": { 00:18:43.551 "name": "static" 00:18:43.551 } 00:18:43.551 } 00:18:43.551 ] 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "subsystem": "nvmf", 00:18:43.551 "config": [ 00:18:43.551 { 00:18:43.551 "method": "nvmf_set_config", 00:18:43.551 "params": { 00:18:43.551 "discovery_filter": "match_any", 00:18:43.551 "admin_cmd_passthru": { 00:18:43.551 "identify_ctrlr": false 00:18:43.551 }, 00:18:43.551 "dhchap_digests": [ 00:18:43.551 "sha256", 00:18:43.551 "sha384", 00:18:43.551 "sha512" 00:18:43.551 ], 00:18:43.551 "dhchap_dhgroups": [ 00:18:43.551 "null", 00:18:43.551 "ffdhe2048", 00:18:43.551 "ffdhe3072", 00:18:43.551 "ffdhe4096", 00:18:43.551 "ffdhe6144", 00:18:43.551 "ffdhe8192" 00:18:43.551 ] 00:18:43.551 } 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "method": "nvmf_set_max_subsystems", 00:18:43.551 "params": { 00:18:43.551 "max_subsystems": 1024 00:18:43.551 } 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "method": "nvmf_set_crdt", 00:18:43.551 "params": { 00:18:43.551 "crdt1": 0, 00:18:43.551 "crdt2": 0, 00:18:43.551 "crdt3": 0 00:18:43.551 } 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "method": "nvmf_create_transport", 00:18:43.551 "params": { 00:18:43.551 "trtype": "TCP", 00:18:43.551 "max_queue_depth": 128, 00:18:43.551 "max_io_qpairs_per_ctrlr": 127, 00:18:43.551 "in_capsule_data_size": 4096, 00:18:43.551 "max_io_size": 131072, 00:18:43.551 "io_unit_size": 131072, 00:18:43.551 "max_aq_depth": 128, 00:18:43.551 "num_shared_buffers": 511, 00:18:43.551 "buf_cache_size": 4294967295, 00:18:43.551 "dif_insert_or_strip": false, 00:18:43.551 "zcopy": false, 00:18:43.551 "c2h_success": false, 00:18:43.551 "sock_priority": 0, 00:18:43.551 "abort_timeout_sec": 1, 00:18:43.551 "ack_timeout": 0, 00:18:43.551 "data_wr_pool_size": 0 00:18:43.551 } 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "method": "nvmf_create_subsystem", 00:18:43.551 "params": { 00:18:43.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.551 "allow_any_host": false, 00:18:43.551 "serial_number": "SPDK00000000000001", 00:18:43.551 "model_number": "SPDK bdev Controller", 00:18:43.551 "max_namespaces": 10, 00:18:43.551 "min_cntlid": 1, 00:18:43.551 
"max_cntlid": 65519, 00:18:43.551 "ana_reporting": false 00:18:43.551 } 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "method": "nvmf_subsystem_add_host", 00:18:43.551 "params": { 00:18:43.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.551 "host": "nqn.2016-06.io.spdk:host1", 00:18:43.551 "psk": "key0" 00:18:43.551 } 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "method": "nvmf_subsystem_add_ns", 00:18:43.551 "params": { 00:18:43.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.551 "namespace": { 00:18:43.551 "nsid": 1, 00:18:43.551 "bdev_name": "malloc0", 00:18:43.551 "nguid": "56F05E4FACDE4EE38F1EAEBBB117C8A6", 00:18:43.551 "uuid": "56f05e4f-acde-4ee3-8f1e-aebbb117c8a6", 00:18:43.551 "no_auto_visible": false 00:18:43.551 } 00:18:43.551 } 00:18:43.551 }, 00:18:43.551 { 00:18:43.551 "method": "nvmf_subsystem_add_listener", 00:18:43.551 "params": { 00:18:43.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.551 "listen_address": { 00:18:43.551 "trtype": "TCP", 00:18:43.552 "adrfam": "IPv4", 00:18:43.552 "traddr": "10.0.0.2", 00:18:43.552 "trsvcid": "4420" 00:18:43.552 }, 00:18:43.552 "secure_channel": true 00:18:43.552 } 00:18:43.552 } 00:18:43.552 ] 00:18:43.552 } 00:18:43.552 ] 00:18:43.552 }' 00:18:43.552 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:43.810 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:43.810 "subsystems": [ 00:18:43.810 { 00:18:43.810 "subsystem": "keyring", 00:18:43.810 "config": [ 00:18:43.810 { 00:18:43.810 "method": "keyring_file_add_key", 00:18:43.810 "params": { 00:18:43.810 "name": "key0", 00:18:43.810 "path": "/tmp/tmp.pZNQTltgiX" 00:18:43.810 } 00:18:43.810 } 00:18:43.810 ] 00:18:43.810 }, 00:18:43.810 { 00:18:43.810 "subsystem": "iobuf", 00:18:43.810 "config": [ 00:18:43.810 { 00:18:43.810 "method": "iobuf_set_options", 00:18:43.810 "params": { 00:18:43.810 "small_pool_count": 8192, 00:18:43.810 "large_pool_count": 1024, 00:18:43.810 "small_bufsize": 8192, 00:18:43.810 "large_bufsize": 135168, 00:18:43.810 "enable_numa": false 00:18:43.810 } 00:18:43.810 } 00:18:43.810 ] 00:18:43.810 }, 00:18:43.810 { 00:18:43.810 "subsystem": "sock", 00:18:43.810 "config": [ 00:18:43.810 { 00:18:43.810 "method": "sock_set_default_impl", 00:18:43.810 "params": { 00:18:43.810 "impl_name": "posix" 00:18:43.810 } 00:18:43.810 }, 00:18:43.810 { 00:18:43.810 "method": "sock_impl_set_options", 00:18:43.810 "params": { 00:18:43.810 "impl_name": "ssl", 00:18:43.810 "recv_buf_size": 4096, 00:18:43.810 "send_buf_size": 4096, 00:18:43.810 "enable_recv_pipe": true, 00:18:43.810 "enable_quickack": false, 00:18:43.810 "enable_placement_id": 0, 00:18:43.810 "enable_zerocopy_send_server": true, 00:18:43.810 "enable_zerocopy_send_client": false, 00:18:43.810 "zerocopy_threshold": 0, 00:18:43.810 "tls_version": 0, 00:18:43.810 "enable_ktls": false 00:18:43.810 } 00:18:43.810 }, 00:18:43.810 { 00:18:43.810 "method": "sock_impl_set_options", 00:18:43.810 "params": { 00:18:43.810 "impl_name": "posix", 00:18:43.810 "recv_buf_size": 2097152, 00:18:43.810 "send_buf_size": 2097152, 00:18:43.810 "enable_recv_pipe": true, 00:18:43.810 "enable_quickack": false, 00:18:43.810 "enable_placement_id": 0, 00:18:43.810 "enable_zerocopy_send_server": true, 00:18:43.810 "enable_zerocopy_send_client": false, 00:18:43.810 "zerocopy_threshold": 0, 00:18:43.810 "tls_version": 0, 00:18:43.810 "enable_ktls": false 00:18:43.810 } 00:18:43.810 
} 00:18:43.810 ] 00:18:43.810 }, 00:18:43.810 { 00:18:43.810 "subsystem": "vmd", 00:18:43.810 "config": [] 00:18:43.810 }, 00:18:43.810 { 00:18:43.810 "subsystem": "accel", 00:18:43.810 "config": [ 00:18:43.810 { 00:18:43.810 "method": "accel_set_options", 00:18:43.810 "params": { 00:18:43.810 "small_cache_size": 128, 00:18:43.810 "large_cache_size": 16, 00:18:43.810 "task_count": 2048, 00:18:43.810 "sequence_count": 2048, 00:18:43.810 "buf_count": 2048 00:18:43.810 } 00:18:43.810 } 00:18:43.810 ] 00:18:43.810 }, 00:18:43.811 { 00:18:43.811 "subsystem": "bdev", 00:18:43.811 "config": [ 00:18:43.811 { 00:18:43.811 "method": "bdev_set_options", 00:18:43.811 "params": { 00:18:43.811 "bdev_io_pool_size": 65535, 00:18:43.811 "bdev_io_cache_size": 256, 00:18:43.811 "bdev_auto_examine": true, 00:18:43.811 "iobuf_small_cache_size": 128, 00:18:43.811 "iobuf_large_cache_size": 16 00:18:43.811 } 00:18:43.811 }, 00:18:43.811 { 00:18:43.811 "method": "bdev_raid_set_options", 00:18:43.811 "params": { 00:18:43.811 "process_window_size_kb": 1024, 00:18:43.811 "process_max_bandwidth_mb_sec": 0 00:18:43.811 } 00:18:43.811 }, 00:18:43.811 { 00:18:43.811 "method": "bdev_iscsi_set_options", 00:18:43.811 "params": { 00:18:43.811 "timeout_sec": 30 00:18:43.811 } 00:18:43.811 }, 00:18:43.811 { 00:18:43.811 "method": "bdev_nvme_set_options", 00:18:43.811 "params": { 00:18:43.811 "action_on_timeout": "none", 00:18:43.811 "timeout_us": 0, 00:18:43.811 "timeout_admin_us": 0, 00:18:43.811 "keep_alive_timeout_ms": 10000, 00:18:43.811 "arbitration_burst": 0, 00:18:43.811 "low_priority_weight": 0, 00:18:43.811 "medium_priority_weight": 0, 00:18:43.811 "high_priority_weight": 0, 00:18:43.811 "nvme_adminq_poll_period_us": 10000, 00:18:43.811 "nvme_ioq_poll_period_us": 0, 00:18:43.811 "io_queue_requests": 512, 00:18:43.811 "delay_cmd_submit": true, 00:18:43.811 "transport_retry_count": 4, 00:18:43.811 "bdev_retry_count": 3, 00:18:43.811 "transport_ack_timeout": 0, 00:18:43.811 "ctrlr_loss_timeout_sec": 0, 00:18:43.811 "reconnect_delay_sec": 0, 00:18:43.811 "fast_io_fail_timeout_sec": 0, 00:18:43.811 "disable_auto_failback": false, 00:18:43.811 "generate_uuids": false, 00:18:43.811 "transport_tos": 0, 00:18:43.811 "nvme_error_stat": false, 00:18:43.811 "rdma_srq_size": 0, 00:18:43.811 "io_path_stat": false, 00:18:43.811 "allow_accel_sequence": false, 00:18:43.811 "rdma_max_cq_size": 0, 00:18:43.811 "rdma_cm_event_timeout_ms": 0, 00:18:43.811 "dhchap_digests": [ 00:18:43.811 "sha256", 00:18:43.811 "sha384", 00:18:43.811 "sha512" 00:18:43.811 ], 00:18:43.811 "dhchap_dhgroups": [ 00:18:43.811 "null", 00:18:43.811 "ffdhe2048", 00:18:43.811 "ffdhe3072", 00:18:43.811 "ffdhe4096", 00:18:43.811 "ffdhe6144", 00:18:43.811 "ffdhe8192" 00:18:43.811 ] 00:18:43.811 } 00:18:43.811 }, 00:18:43.811 { 00:18:43.811 "method": "bdev_nvme_attach_controller", 00:18:43.811 "params": { 00:18:43.811 "name": "TLSTEST", 00:18:43.811 "trtype": "TCP", 00:18:43.811 "adrfam": "IPv4", 00:18:43.811 "traddr": "10.0.0.2", 00:18:43.811 "trsvcid": "4420", 00:18:43.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.811 "prchk_reftag": false, 00:18:43.811 "prchk_guard": false, 00:18:43.811 "ctrlr_loss_timeout_sec": 0, 00:18:43.811 "reconnect_delay_sec": 0, 00:18:43.811 "fast_io_fail_timeout_sec": 0, 00:18:43.811 "psk": "key0", 00:18:43.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:43.811 "hdgst": false, 00:18:43.811 "ddgst": false, 00:18:43.811 "multipath": "multipath" 00:18:43.811 } 00:18:43.811 }, 00:18:43.811 { 00:18:43.811 "method": 
"bdev_nvme_set_hotplug", 00:18:43.811 "params": { 00:18:43.811 "period_us": 100000, 00:18:43.811 "enable": false 00:18:43.811 } 00:18:43.811 }, 00:18:43.811 { 00:18:43.811 "method": "bdev_wait_for_examine" 00:18:43.811 } 00:18:43.811 ] 00:18:43.811 }, 00:18:43.811 { 00:18:43.811 "subsystem": "nbd", 00:18:43.811 "config": [] 00:18:43.811 } 00:18:43.811 ] 00:18:43.811 }' 00:18:43.811 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2416150 00:18:43.811 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2416150 ']' 00:18:43.811 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2416150 00:18:43.811 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:43.811 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.811 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2416150 00:18:43.811 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:43.811 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:43.811 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2416150' 00:18:43.811 killing process with pid 2416150 00:18:43.811 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2416150 00:18:43.811 Received shutdown signal, test time was about 10.000000 seconds 00:18:43.811 00:18:43.811 Latency(us) 00:18:43.811 [2024-12-10T03:06:38.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.811 [2024-12-10T03:06:38.200Z] =================================================================================================================== 00:18:43.811 [2024-12-10T03:06:38.200Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:43.811 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2416150 00:18:44.069 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2415856 00:18:44.069 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2415856 ']' 00:18:44.069 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2415856 00:18:44.069 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:44.069 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.069 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2415856 00:18:44.069 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:44.069 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:44.069 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2415856' 00:18:44.069 killing process with pid 2415856 00:18:44.069 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2415856 00:18:44.069 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2415856 00:18:44.328 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:44.328 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:44.328 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:44.328 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.328 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:44.328 "subsystems": [ 00:18:44.328 { 00:18:44.328 "subsystem": "keyring", 00:18:44.328 "config": [ 00:18:44.328 { 00:18:44.328 "method": "keyring_file_add_key", 00:18:44.328 "params": { 00:18:44.328 "name": "key0", 00:18:44.328 "path": "/tmp/tmp.pZNQTltgiX" 00:18:44.328 } 00:18:44.328 } 00:18:44.328 ] 00:18:44.328 }, 00:18:44.328 { 00:18:44.328 "subsystem": "iobuf", 00:18:44.328 "config": [ 00:18:44.328 { 00:18:44.328 "method": "iobuf_set_options", 00:18:44.328 "params": { 00:18:44.328 "small_pool_count": 8192, 00:18:44.328 "large_pool_count": 1024, 00:18:44.328 "small_bufsize": 8192, 00:18:44.328 "large_bufsize": 135168, 00:18:44.328 "enable_numa": false 00:18:44.328 } 00:18:44.328 } 00:18:44.328 ] 00:18:44.328 }, 00:18:44.328 { 00:18:44.328 "subsystem": "sock", 00:18:44.328 "config": [ 00:18:44.328 { 00:18:44.328 "method": "sock_set_default_impl", 00:18:44.328 "params": { 00:18:44.328 "impl_name": "posix" 00:18:44.328 } 00:18:44.328 }, 00:18:44.328 { 00:18:44.328 "method": "sock_impl_set_options", 00:18:44.328 "params": { 00:18:44.328 "impl_name": "ssl", 00:18:44.328 "recv_buf_size": 4096, 00:18:44.328 "send_buf_size": 4096, 00:18:44.328 "enable_recv_pipe": true, 00:18:44.328 "enable_quickack": false, 00:18:44.328 "enable_placement_id": 0, 00:18:44.328 "enable_zerocopy_send_server": true, 00:18:44.328 "enable_zerocopy_send_client": false, 00:18:44.328 "zerocopy_threshold": 0, 00:18:44.328 "tls_version": 0, 00:18:44.328 "enable_ktls": false 00:18:44.328 } 00:18:44.328 }, 00:18:44.328 { 00:18:44.328 "method": "sock_impl_set_options", 00:18:44.328 "params": { 00:18:44.328 "impl_name": "posix", 00:18:44.328 "recv_buf_size": 2097152, 00:18:44.328 "send_buf_size": 2097152, 00:18:44.328 "enable_recv_pipe": true, 00:18:44.328 "enable_quickack": false, 00:18:44.328 "enable_placement_id": 0, 00:18:44.329 "enable_zerocopy_send_server": true, 00:18:44.329 "enable_zerocopy_send_client": false, 00:18:44.329 "zerocopy_threshold": 0, 00:18:44.329 "tls_version": 0, 00:18:44.329 "enable_ktls": false 00:18:44.329 } 00:18:44.329 } 00:18:44.329 ] 00:18:44.329 }, 00:18:44.329 { 00:18:44.329 "subsystem": "vmd", 00:18:44.329 "config": [] 00:18:44.329 }, 00:18:44.329 { 00:18:44.329 "subsystem": "accel", 00:18:44.329 "config": [ 00:18:44.329 { 00:18:44.329 "method": "accel_set_options", 00:18:44.329 "params": { 00:18:44.329 "small_cache_size": 128, 00:18:44.329 "large_cache_size": 16, 00:18:44.329 "task_count": 2048, 00:18:44.329 "sequence_count": 2048, 00:18:44.329 "buf_count": 2048 00:18:44.329 } 00:18:44.329 } 00:18:44.329 ] 00:18:44.329 }, 00:18:44.329 { 00:18:44.329 "subsystem": "bdev", 00:18:44.329 "config": [ 00:18:44.329 { 00:18:44.329 "method": "bdev_set_options", 00:18:44.329 "params": { 00:18:44.329 "bdev_io_pool_size": 65535, 00:18:44.329 "bdev_io_cache_size": 256, 00:18:44.329 "bdev_auto_examine": true, 00:18:44.329 "iobuf_small_cache_size": 128, 00:18:44.329 "iobuf_large_cache_size": 16 00:18:44.329 } 00:18:44.329 }, 00:18:44.329 { 00:18:44.329 "method": "bdev_raid_set_options", 00:18:44.329 "params": { 00:18:44.329 
"process_window_size_kb": 1024, 00:18:44.329 "process_max_bandwidth_mb_sec": 0 00:18:44.329 } 00:18:44.329 }, 00:18:44.329 { 00:18:44.329 "method": "bdev_iscsi_set_options", 00:18:44.329 "params": { 00:18:44.329 "timeout_sec": 30 00:18:44.329 } 00:18:44.329 }, 00:18:44.329 { 00:18:44.329 "method": "bdev_nvme_set_options", 00:18:44.329 "params": { 00:18:44.329 "action_on_timeout": "none", 00:18:44.329 "timeout_us": 0, 00:18:44.329 "timeout_admin_us": 0, 00:18:44.329 "keep_alive_timeout_ms": 10000, 00:18:44.329 "arbitration_burst": 0, 00:18:44.329 "low_priority_weight": 0, 00:18:44.329 "medium_priority_weight": 0, 00:18:44.329 "high_priority_weight": 0, 00:18:44.329 "nvme_adminq_poll_period_us": 10000, 00:18:44.329 "nvme_ioq_poll_period_us": 0, 00:18:44.329 "io_queue_requests": 0, 00:18:44.329 "delay_cmd_submit": true, 00:18:44.329 "transport_retry_count": 4, 00:18:44.329 "bdev_retry_count": 3, 00:18:44.329 "transport_ack_timeout": 0, 00:18:44.329 "ctrlr_loss_timeout_sec": 0, 00:18:44.329 "reconnect_delay_sec": 0, 00:18:44.329 "fast_io_fail_timeout_sec": 0, 00:18:44.329 "disable_auto_failback": false, 00:18:44.329 "generate_uuids": false, 00:18:44.329 "transport_tos": 0, 00:18:44.329 "nvme_error_stat": false, 00:18:44.329 "rdma_srq_size": 0, 00:18:44.329 "io_path_stat": false, 00:18:44.329 "allow_accel_sequence": false, 00:18:44.329 "rdma_max_cq_size": 0, 00:18:44.329 "rdma_cm_event_timeout_ms": 0, 00:18:44.329 "dhchap_digests": [ 00:18:44.329 "sha256", 00:18:44.329 "sha384", 00:18:44.329 "sha512" 00:18:44.329 ], 00:18:44.329 "dhchap_dhgroups": [ 00:18:44.329 "null", 00:18:44.329 "ffdhe2048", 00:18:44.329 "ffdhe3072", 00:18:44.329 "ffdhe4096", 00:18:44.329 "ffdhe6144", 00:18:44.329 "ffdhe8192" 00:18:44.329 ] 00:18:44.329 } 00:18:44.329 }, 00:18:44.329 { 00:18:44.329 "method": "bdev_nvme_set_hotplug", 00:18:44.329 "params": { 00:18:44.329 "period_us": 100000, 00:18:44.329 "enable": false 00:18:44.329 } 00:18:44.329 }, 00:18:44.329 { 00:18:44.329 "method": "bdev_malloc_create", 00:18:44.329 "params": { 00:18:44.329 "name": "malloc0", 00:18:44.329 "num_blocks": 8192, 00:18:44.329 "block_size": 4096, 00:18:44.329 "physical_block_size": 4096, 00:18:44.329 "uuid": "56f05e4f-acde-4ee3-8f1e-aebbb117c8a6", 00:18:44.329 "optimal_io_boundary": 0, 00:18:44.329 "md_size": 0, 00:18:44.329 "dif_type": 0, 00:18:44.329 "dif_is_head_of_md": false, 00:18:44.329 "dif_pi_format": 0 00:18:44.329 } 00:18:44.329 }, 00:18:44.329 { 00:18:44.329 "method": "bdev_wait_for_examine" 00:18:44.329 } 00:18:44.329 ] 00:18:44.329 }, 00:18:44.329 { 00:18:44.329 "subsystem": "nbd", 00:18:44.329 "config": [] 00:18:44.329 }, 00:18:44.329 { 00:18:44.329 "subsystem": "scheduler", 00:18:44.329 "config": [ 00:18:44.329 { 00:18:44.329 "method": "framework_set_scheduler", 00:18:44.329 "params": { 00:18:44.329 "name": "static" 00:18:44.329 } 00:18:44.329 } 00:18:44.329 ] 00:18:44.329 }, 00:18:44.329 { 00:18:44.329 "subsystem": "nvmf", 00:18:44.329 "config": [ 00:18:44.329 { 00:18:44.329 "method": "nvmf_set_config", 00:18:44.329 "params": { 00:18:44.329 "discovery_filter": "match_any", 00:18:44.329 "admin_cmd_passthru": { 00:18:44.329 "identify_ctrlr": false 00:18:44.329 }, 00:18:44.329 "dhchap_digests": [ 00:18:44.329 "sha256", 00:18:44.329 "sha384", 00:18:44.329 "sha512" 00:18:44.329 ], 00:18:44.329 "dhchap_dhgroups": [ 00:18:44.329 "null", 00:18:44.329 "ffdhe2048", 00:18:44.329 "ffdhe3072", 00:18:44.329 "ffdhe4096", 00:18:44.329 "ffdhe6144", 00:18:44.329 "ffdhe8192" 00:18:44.329 ] 00:18:44.329 } 00:18:44.329 }, 00:18:44.329 { 
00:18:44.329 "method": "nvmf_set_max_subsystems", 00:18:44.329 "params": { 00:18:44.329 "max_subsystems": 1024 00:18:44.329 } 00:18:44.329 }, 00:18:44.329 { 00:18:44.329 "method": "nvmf_set_crdt", 00:18:44.329 "params": { 00:18:44.329 "crdt1": 0, 00:18:44.329 "crdt2": 0, 00:18:44.329 "crdt3": 0 00:18:44.329 } 00:18:44.329 }, 00:18:44.329 { 00:18:44.329 "method": "nvmf_create_transport", 00:18:44.329 "params": { 00:18:44.329 "trtype": "TCP", 00:18:44.329 "max_queue_depth": 128, 00:18:44.329 "max_io_qpairs_per_ctrlr": 127, 00:18:44.329 "in_capsule_data_size": 4096, 00:18:44.329 "max_io_size": 131072, 00:18:44.329 "io_unit_size": 131072, 00:18:44.329 "max_aq_depth": 128, 00:18:44.329 "num_shared_buffers": 511, 00:18:44.329 "buf_cache_size": 4294967295, 00:18:44.329 "dif_insert_or_strip": false, 00:18:44.329 "zcopy": false, 00:18:44.329 "c2h_success": false, 00:18:44.329 "sock_priority": 0, 00:18:44.329 "abort_timeout_sec": 1, 00:18:44.329 "ack_timeout": 0, 00:18:44.329 "data_wr_pool_size": 0 00:18:44.329 } 00:18:44.329 }, 00:18:44.329 { 00:18:44.329 "method": "nvmf_create_subsystem", 00:18:44.329 "params": { 00:18:44.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.329 "allow_any_host": false, 00:18:44.329 "serial_number": "SPDK00000000000001", 00:18:44.329 "model_number": "SPDK bdev Controller", 00:18:44.329 "max_namespaces": 10, 00:18:44.329 "min_cntlid": 1, 00:18:44.329 "max_cntlid": 65519, 00:18:44.329 "ana_reporting": false 00:18:44.329 } 00:18:44.329 }, 00:18:44.329 { 00:18:44.329 "method": "nvmf_subsystem_add_host", 00:18:44.329 "params": { 00:18:44.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.329 "host": "nqn.2016-06.io.spdk:host1", 00:18:44.329 "psk": "key0" 00:18:44.329 } 00:18:44.329 }, 00:18:44.329 { 00:18:44.329 "method": "nvmf_subsystem_add_ns", 00:18:44.329 "params": { 00:18:44.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.329 "namespace": { 00:18:44.329 "nsid": 1, 00:18:44.329 "bdev_name": "malloc0", 00:18:44.329 "nguid": "56F05E4FACDE4EE38F1EAEBBB117C8A6", 00:18:44.329 "uuid": "56f05e4f-acde-4ee3-8f1e-aebbb117c8a6", 00:18:44.329 "no_auto_visible": false 00:18:44.329 } 00:18:44.329 } 00:18:44.329 }, 00:18:44.329 { 00:18:44.329 "method": "nvmf_subsystem_add_listener", 00:18:44.329 "params": { 00:18:44.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.329 "listen_address": { 00:18:44.329 "trtype": "TCP", 00:18:44.329 "adrfam": "IPv4", 00:18:44.330 "traddr": "10.0.0.2", 00:18:44.330 "trsvcid": "4420" 00:18:44.330 }, 00:18:44.330 "secure_channel": true 00:18:44.330 } 00:18:44.330 } 00:18:44.330 ] 00:18:44.330 } 00:18:44.330 ] 00:18:44.330 }' 00:18:44.330 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2416425 00:18:44.330 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:44.330 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2416425 00:18:44.330 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2416425 ']' 00:18:44.330 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.330 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.330 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:18:44.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.330 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.330 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.330 [2024-12-10 04:06:38.537413] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:44.330 [2024-12-10 04:06:38.537513] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.330 [2024-12-10 04:06:38.614946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.330 [2024-12-10 04:06:38.671927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.330 [2024-12-10 04:06:38.671986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.330 [2024-12-10 04:06:38.672024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.330 [2024-12-10 04:06:38.672037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.330 [2024-12-10 04:06:38.672048] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.330 [2024-12-10 04:06:38.672677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.589 [2024-12-10 04:06:38.913509] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.589 [2024-12-10 04:06:38.945531] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:44.589 [2024-12-10 04:06:38.945789] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.157 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.157 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:45.157 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:45.157 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.157 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.415 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.415 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2416574 00:18:45.415 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2416574 /var/tmp/bdevperf.sock 00:18:45.415 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2416574 ']' 00:18:45.415 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.415 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:45.415 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.415 
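In this phase both applications are booted from the JSON captured a moment ago instead of being configured RPC-by-RPC: the '-c /dev/fd/62' and '-c /dev/fd/63' arguments in the trace are the descriptors bash opens for process substitution of the config echoed at the same script line. A rough equivalent of the bdevperf relaunch, assuming $bdevperfconf holds the JSON printed below:

# <( ... ) expands to /dev/fd/NN, which is how '-c /dev/fd/63' appears in the traced command.
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
  -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")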
04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:45.415 "subsystems": [ 00:18:45.415 { 00:18:45.415 "subsystem": "keyring", 00:18:45.415 "config": [ 00:18:45.415 { 00:18:45.415 "method": "keyring_file_add_key", 00:18:45.415 "params": { 00:18:45.415 "name": "key0", 00:18:45.415 "path": "/tmp/tmp.pZNQTltgiX" 00:18:45.415 } 00:18:45.415 } 00:18:45.415 ] 00:18:45.415 }, 00:18:45.415 { 00:18:45.415 "subsystem": "iobuf", 00:18:45.415 "config": [ 00:18:45.415 { 00:18:45.415 "method": "iobuf_set_options", 00:18:45.415 "params": { 00:18:45.415 "small_pool_count": 8192, 00:18:45.415 "large_pool_count": 1024, 00:18:45.415 "small_bufsize": 8192, 00:18:45.415 "large_bufsize": 135168, 00:18:45.415 "enable_numa": false 00:18:45.415 } 00:18:45.415 } 00:18:45.415 ] 00:18:45.415 }, 00:18:45.415 { 00:18:45.415 "subsystem": "sock", 00:18:45.415 "config": [ 00:18:45.415 { 00:18:45.415 "method": "sock_set_default_impl", 00:18:45.415 "params": { 00:18:45.415 "impl_name": "posix" 00:18:45.415 } 00:18:45.415 }, 00:18:45.415 { 00:18:45.415 "method": "sock_impl_set_options", 00:18:45.415 "params": { 00:18:45.415 "impl_name": "ssl", 00:18:45.415 "recv_buf_size": 4096, 00:18:45.415 "send_buf_size": 4096, 00:18:45.415 "enable_recv_pipe": true, 00:18:45.415 "enable_quickack": false, 00:18:45.415 "enable_placement_id": 0, 00:18:45.416 "enable_zerocopy_send_server": true, 00:18:45.416 "enable_zerocopy_send_client": false, 00:18:45.416 "zerocopy_threshold": 0, 00:18:45.416 "tls_version": 0, 00:18:45.416 "enable_ktls": false 00:18:45.416 } 00:18:45.416 }, 00:18:45.416 { 00:18:45.416 "method": "sock_impl_set_options", 00:18:45.416 "params": { 00:18:45.416 "impl_name": "posix", 00:18:45.416 "recv_buf_size": 2097152, 00:18:45.416 "send_buf_size": 2097152, 00:18:45.416 "enable_recv_pipe": true, 00:18:45.416 "enable_quickack": false, 00:18:45.416 "enable_placement_id": 0, 00:18:45.416 "enable_zerocopy_send_server": true, 00:18:45.416 "enable_zerocopy_send_client": false, 00:18:45.416 "zerocopy_threshold": 0, 00:18:45.416 "tls_version": 0, 00:18:45.416 "enable_ktls": false 00:18:45.416 } 00:18:45.416 } 00:18:45.416 ] 00:18:45.416 }, 00:18:45.416 { 00:18:45.416 "subsystem": "vmd", 00:18:45.416 "config": [] 00:18:45.416 }, 00:18:45.416 { 00:18:45.416 "subsystem": "accel", 00:18:45.416 "config": [ 00:18:45.416 { 00:18:45.416 "method": "accel_set_options", 00:18:45.416 "params": { 00:18:45.416 "small_cache_size": 128, 00:18:45.416 "large_cache_size": 16, 00:18:45.416 "task_count": 2048, 00:18:45.416 "sequence_count": 2048, 00:18:45.416 "buf_count": 2048 00:18:45.416 } 00:18:45.416 } 00:18:45.416 ] 00:18:45.416 }, 00:18:45.416 { 00:18:45.416 "subsystem": "bdev", 00:18:45.416 "config": [ 00:18:45.416 { 00:18:45.416 "method": "bdev_set_options", 00:18:45.416 "params": { 00:18:45.416 "bdev_io_pool_size": 65535, 00:18:45.416 "bdev_io_cache_size": 256, 00:18:45.416 "bdev_auto_examine": true, 00:18:45.416 "iobuf_small_cache_size": 128, 00:18:45.416 "iobuf_large_cache_size": 16 00:18:45.416 } 00:18:45.416 }, 00:18:45.416 { 00:18:45.416 "method": "bdev_raid_set_options", 00:18:45.416 "params": { 00:18:45.416 "process_window_size_kb": 1024, 00:18:45.416 "process_max_bandwidth_mb_sec": 0 00:18:45.416 } 00:18:45.416 }, 00:18:45.416 { 00:18:45.416 "method": "bdev_iscsi_set_options", 00:18:45.416 "params": { 00:18:45.416 "timeout_sec": 30 00:18:45.416 } 00:18:45.416 }, 00:18:45.416 { 00:18:45.416 "method": "bdev_nvme_set_options", 00:18:45.416 "params": { 00:18:45.416 "action_on_timeout": "none", 00:18:45.416 
"timeout_us": 0, 00:18:45.416 "timeout_admin_us": 0, 00:18:45.416 "keep_alive_timeout_ms": 10000, 00:18:45.416 "arbitration_burst": 0, 00:18:45.416 "low_priority_weight": 0, 00:18:45.416 "medium_priority_weight": 0, 00:18:45.416 "high_priority_weight": 0, 00:18:45.416 "nvme_adminq_poll_period_us": 10000, 00:18:45.416 "nvme_ioq_poll_period_us": 0, 00:18:45.416 "io_queue_requests": 512, 00:18:45.416 "delay_cmd_submit": true, 00:18:45.416 "transport_retry_count": 4, 00:18:45.416 "bdev_retry_count": 3, 00:18:45.416 "transport_ack_timeout": 0, 00:18:45.416 "ctrlr_loss_timeout_sec": 0, 00:18:45.416 "reconnect_delay_sec": 0, 00:18:45.416 "fast_io_fail_timeout_sec": 0, 00:18:45.416 "disable_auto_failback": false, 00:18:45.416 "generate_uuids": false, 00:18:45.416 "transport_tos": 0, 00:18:45.416 "nvme_error_stat": false, 00:18:45.416 "rdma_srq_size": 0, 00:18:45.416 "io_path_stat": false, 00:18:45.416 "allow_accel_sequence": false, 00:18:45.416 "rdma_max_cq_size": 0, 00:18:45.416 "rdma_cm_event_timeout_ms": 0, 00:18:45.416 "dhchap_digests": [ 00:18:45.416 "sha256", 00:18:45.416 "sha384", 00:18:45.416 "sha512" 00:18:45.416 ], 00:18:45.416 "dhchap_dhgroups": [ 00:18:45.416 "null", 00:18:45.416 "ffdhe2048", 00:18:45.416 "ffdhe3072", 00:18:45.416 "ffdhe4096", 00:18:45.416 "ffdhe6144", 00:18:45.416 "ffdhe8192" 00:18:45.416 ] 00:18:45.416 } 00:18:45.416 }, 00:18:45.416 { 00:18:45.416 "method": "bdev_nvme_attach_controller", 00:18:45.416 "params": { 00:18:45.416 "name": "TLSTEST", 00:18:45.416 "trtype": "TCP", 00:18:45.416 "adrfam": "IPv4", 00:18:45.416 "traddr": "10.0.0.2", 00:18:45.416 "trsvcid": "4420", 00:18:45.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.416 "prchk_reftag": false, 00:18:45.416 "prchk_guard": false, 00:18:45.416 "ctrlr_loss_timeout_sec": 0, 00:18:45.416 "reconnect_delay_sec": 0, 00:18:45.416 "fast_io_fail_timeout_sec": 0, 00:18:45.416 "psk": "key0", 00:18:45.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.416 "hdgst": false, 00:18:45.416 "ddgst": false, 00:18:45.416 "multipath": "multipath" 00:18:45.416 } 00:18:45.416 }, 00:18:45.416 { 00:18:45.416 "method": "bdev_nvme_set_hotplug", 00:18:45.416 "params": { 00:18:45.416 "period_us": 100000, 00:18:45.416 "enable": false 00:18:45.416 } 00:18:45.416 }, 00:18:45.416 { 00:18:45.416 "method": "bdev_wait_for_examine" 00:18:45.416 } 00:18:45.416 ] 00:18:45.416 }, 00:18:45.416 { 00:18:45.416 "subsystem": "nbd", 00:18:45.416 "config": [] 00:18:45.416 } 00:18:45.416 ] 00:18:45.416 }' 00:18:45.416 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:45.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.416 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.416 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.416 [2024-12-10 04:06:39.595176] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:18:45.416 [2024-12-10 04:06:39.595262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2416574 ] 00:18:45.416 [2024-12-10 04:06:39.662352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.416 [2024-12-10 04:06:39.720193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.674 [2024-12-10 04:06:39.904664] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:45.674 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.674 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:45.674 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:45.933 Running I/O for 10 seconds... 00:18:47.807 3597.00 IOPS, 14.05 MiB/s [2024-12-10T03:06:43.577Z] 3609.50 IOPS, 14.10 MiB/s [2024-12-10T03:06:44.516Z] 3551.00 IOPS, 13.87 MiB/s [2024-12-10T03:06:45.454Z] 3579.50 IOPS, 13.98 MiB/s [2024-12-10T03:06:46.392Z] 3592.20 IOPS, 14.03 MiB/s [2024-12-10T03:06:47.329Z] 3605.83 IOPS, 14.09 MiB/s [2024-12-10T03:06:48.267Z] 3618.57 IOPS, 14.14 MiB/s [2024-12-10T03:06:49.207Z] 3617.75 IOPS, 14.13 MiB/s [2024-12-10T03:06:50.586Z] 3619.22 IOPS, 14.14 MiB/s [2024-12-10T03:06:50.586Z] 3620.10 IOPS, 14.14 MiB/s 00:18:56.197 Latency(us) 00:18:56.197 [2024-12-10T03:06:50.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.197 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:56.197 Verification LBA range: start 0x0 length 0x2000 00:18:56.197 TLSTESTn1 : 10.02 3626.02 14.16 0.00 0.00 35242.56 6407.96 37865.24 00:18:56.197 [2024-12-10T03:06:50.586Z] =================================================================================================================== 00:18:56.197 [2024-12-10T03:06:50.586Z] Total : 3626.02 14.16 0.00 0.00 35242.56 6407.96 37865.24 00:18:56.197 { 00:18:56.197 "results": [ 00:18:56.197 { 00:18:56.197 "job": "TLSTESTn1", 00:18:56.197 "core_mask": "0x4", 00:18:56.197 "workload": "verify", 00:18:56.197 "status": "finished", 00:18:56.197 "verify_range": { 00:18:56.197 "start": 0, 00:18:56.197 "length": 8192 00:18:56.197 }, 00:18:56.197 "queue_depth": 128, 00:18:56.197 "io_size": 4096, 00:18:56.197 "runtime": 10.018692, 00:18:56.197 "iops": 3626.0222392304304, 00:18:56.197 "mibps": 14.164149371993869, 00:18:56.197 "io_failed": 0, 00:18:56.197 "io_timeout": 0, 00:18:56.197 "avg_latency_us": 35242.55500191669, 00:18:56.197 "min_latency_us": 6407.964444444445, 00:18:56.197 "max_latency_us": 37865.24444444444 00:18:56.197 } 00:18:56.197 ], 00:18:56.197 "core_count": 1 00:18:56.197 } 00:18:56.197 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:56.197 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2416574 00:18:56.197 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2416574 ']' 00:18:56.197 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2416574 00:18:56.197 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:18:56.197 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.197 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2416574 00:18:56.197 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:56.197 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:56.197 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2416574' 00:18:56.197 killing process with pid 2416574 00:18:56.197 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2416574 00:18:56.197 Received shutdown signal, test time was about 10.000000 seconds 00:18:56.197 00:18:56.197 Latency(us) 00:18:56.197 [2024-12-10T03:06:50.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.197 [2024-12-10T03:06:50.586Z] =================================================================================================================== 00:18:56.197 [2024-12-10T03:06:50.587Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:56.198 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2416574 00:18:56.198 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2416425 00:18:56.198 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2416425 ']' 00:18:56.198 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2416425 00:18:56.198 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:56.198 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.198 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2416425 00:18:56.198 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:56.198 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:56.198 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2416425' 00:18:56.198 killing process with pid 2416425 00:18:56.198 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2416425 00:18:56.198 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2416425 00:18:56.458 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:56.458 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:56.458 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:56.458 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.458 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2417898 00:18:56.458 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:56.458 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2417898 
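Editor's note: the teardown above is the harness's killprocess pattern: check the pid is still alive, check it is not a sudo wrapper, send it a signal, then wait so the "Received shutdown signal" summary is flushed before the next stage starts. A simplified stand-alone version (not the harness function itself, which handles the sudo case differently) could look like this:

  # Simplified kill-and-wait helper in the spirit of killprocess() above.
  stop_spdk_app() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0                        # already gone
      [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # leave sudo wrappers alone here
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                                       # reap it and let it print its summary
  }
  # e.g. stop_spdk_app "$bdevperf_pid"; stop_spdk_app "$nvmfpid"
  # note: 'wait' only works for children of the current shell, as in the harness.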
00:18:56.458 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2417898 ']' 00:18:56.458 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.458 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.458 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.458 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.458 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.458 [2024-12-10 04:06:50.768055] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:56.458 [2024-12-10 04:06:50.768143] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.717 [2024-12-10 04:06:50.847701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.717 [2024-12-10 04:06:50.904912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.717 [2024-12-10 04:06:50.904980] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.717 [2024-12-10 04:06:50.905008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.717 [2024-12-10 04:06:50.905019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.717 [2024-12-10 04:06:50.905028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:56.717 [2024-12-10 04:06:50.905691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.717 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.717 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:56.717 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:56.717 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:56.717 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.717 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.717 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.pZNQTltgiX 00:18:56.717 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pZNQTltgiX 00:18:56.717 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:56.976 [2024-12-10 04:06:51.350654] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.235 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:57.492 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:57.750 [2024-12-10 04:06:51.908119] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:57.750 [2024-12-10 04:06:51.908364] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.750 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:58.008 malloc0 00:18:58.008 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:58.266 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pZNQTltgiX 00:18:58.524 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:58.782 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2418182 00:18:58.782 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:58.782 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:58.782 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2418182 /var/tmp/bdevperf.sock 00:18:58.782 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
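Editor's note: the second target instance (pid 2417898) is configured at runtime instead of from JSON. setup_nvmf_tgt above creates the TCP transport, the subsystem and a TLS-only listener (-k), backs it with a malloc bdev, then registers the PSK file and binds it to the host NQN. The same sequence is collected into one helper below for reference; the rpc.py invocations are exactly the ones shown above, only the function and variable names are ours.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  setup_tls_target() {
      local key=$1                                   # PSK interchange file, e.g. /tmp/tmp.pZNQTltgiX
      $RPC nvmf_create_transport -t tcp -o
      $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
          -s SPDK00000000000001 -m 10                # serial number / max namespaces, as in the dump above
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -k              # -k: listener requires a secure (TLS) channel
      $RPC bdev_malloc_create 32 4096 -b malloc0     # 32 MB bdev, 4096-byte blocks
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
      $RPC keyring_file_add_key key0 "$key"
      $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
          nqn.2016-06.io.spdk:host1 --psk key0
  }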
common/autotest_common.sh@835 -- # '[' -z 2418182 ']' 00:18:58.782 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.782 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.782 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.782 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.782 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.782 [2024-12-10 04:06:53.032859] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:58.782 [2024-12-10 04:06:53.032940] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2418182 ] 00:18:58.782 [2024-12-10 04:06:53.097827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.782 [2024-12-10 04:06:53.156693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.039 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.039 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:59.039 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pZNQTltgiX 00:18:59.298 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:59.556 [2024-12-10 04:06:53.785602] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.556 nvme0n1 00:18:59.556 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:59.815 Running I/O for 1 seconds... 
00:19:00.754 3557.00 IOPS, 13.89 MiB/s 00:19:00.754 Latency(us) 00:19:00.754 [2024-12-10T03:06:55.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.754 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:00.754 Verification LBA range: start 0x0 length 0x2000 00:19:00.754 nvme0n1 : 1.02 3609.93 14.10 0.00 0.00 35116.05 9126.49 27185.30 00:19:00.754 [2024-12-10T03:06:55.143Z] =================================================================================================================== 00:19:00.754 [2024-12-10T03:06:55.143Z] Total : 3609.93 14.10 0.00 0.00 35116.05 9126.49 27185.30 00:19:00.754 { 00:19:00.754 "results": [ 00:19:00.754 { 00:19:00.754 "job": "nvme0n1", 00:19:00.754 "core_mask": "0x2", 00:19:00.754 "workload": "verify", 00:19:00.754 "status": "finished", 00:19:00.754 "verify_range": { 00:19:00.754 "start": 0, 00:19:00.754 "length": 8192 00:19:00.754 }, 00:19:00.754 "queue_depth": 128, 00:19:00.754 "io_size": 4096, 00:19:00.754 "runtime": 1.020795, 00:19:00.754 "iops": 3609.931474977836, 00:19:00.754 "mibps": 14.101294824132172, 00:19:00.754 "io_failed": 0, 00:19:00.754 "io_timeout": 0, 00:19:00.754 "avg_latency_us": 35116.045636866176, 00:19:00.754 "min_latency_us": 9126.494814814814, 00:19:00.754 "max_latency_us": 27185.303703703703 00:19:00.754 } 00:19:00.754 ], 00:19:00.754 "core_count": 1 00:19:00.754 } 00:19:00.754 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2418182 00:19:00.754 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2418182 ']' 00:19:00.754 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2418182 00:19:00.754 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:00.754 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.754 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2418182 00:19:00.754 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:00.754 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:00.754 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2418182' 00:19:00.754 killing process with pid 2418182 00:19:00.754 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2418182 00:19:00.754 Received shutdown signal, test time was about 1.000000 seconds 00:19:00.754 00:19:00.754 Latency(us) 00:19:00.754 [2024-12-10T03:06:55.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.754 [2024-12-10T03:06:55.143Z] =================================================================================================================== 00:19:00.754 [2024-12-10T03:06:55.143Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:00.754 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2418182 00:19:01.014 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2417898 00:19:01.014 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2417898 ']' 00:19:01.014 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2417898 00:19:01.014 04:06:55 
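Editor's note: each pass ends with bdevperf printing the human-readable Latency(us) table and an equivalent machine-readable JSON block; the fields (iops, mibps, avg_latency_us, io_failed and so on) carry the same numbers. When post-processing a saved copy of such a block, a one-liner along these lines pulls out the headline figures; result.json is a placeholder, since in this log the JSON goes straight to the console.

  # Extract headline numbers from a saved bdevperf JSON result block (file name is ours).
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.avg_latency_us) us avg latency, \(.io_failed) failed"' result.json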
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:01.014 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.014 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2417898 00:19:01.014 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:01.014 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:01.014 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2417898' 00:19:01.014 killing process with pid 2417898 00:19:01.014 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2417898 00:19:01.014 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2417898 00:19:01.272 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:01.272 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:01.272 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:01.272 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.272 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2418468 00:19:01.272 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:01.272 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2418468 00:19:01.272 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2418468 ']' 00:19:01.272 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.272 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.272 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.272 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.272 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.272 [2024-12-10 04:06:55.556136] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:19:01.272 [2024-12-10 04:06:55.556229] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.272 [2024-12-10 04:06:55.626085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.530 [2024-12-10 04:06:55.675655] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.530 [2024-12-10 04:06:55.675716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:01.530 [2024-12-10 04:06:55.675744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.530 [2024-12-10 04:06:55.675755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.530 [2024-12-10 04:06:55.675772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:01.530 [2024-12-10 04:06:55.676343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.530 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.530 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:01.530 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:01.530 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:01.530 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.530 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.530 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:01.530 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.530 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.530 [2024-12-10 04:06:55.815973] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.530 malloc0 00:19:01.530 [2024-12-10 04:06:55.846336] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:01.530 [2024-12-10 04:06:55.846613] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.530 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.530 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2418499 00:19:01.530 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:01.531 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2418499 /var/tmp/bdevperf.sock 00:19:01.531 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2418499 ']' 00:19:01.531 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.531 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.531 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.531 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.531 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.789 [2024-12-10 04:06:55.920902] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:19:01.789 [2024-12-10 04:06:55.920986] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2418499 ] 00:19:01.789 [2024-12-10 04:06:55.991948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.789 [2024-12-10 04:06:56.048386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.789 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.789 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:01.789 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pZNQTltgiX 00:19:02.047 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:02.307 [2024-12-10 04:06:56.651740] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.568 nvme0n1 00:19:02.568 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:02.568 Running I/O for 1 seconds... 00:19:03.509 3439.00 IOPS, 13.43 MiB/s 00:19:03.509 Latency(us) 00:19:03.509 [2024-12-10T03:06:57.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.509 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:03.509 Verification LBA range: start 0x0 length 0x2000 00:19:03.509 nvme0n1 : 1.04 3444.44 13.45 0.00 0.00 36610.97 10048.85 34952.53 00:19:03.509 [2024-12-10T03:06:57.898Z] =================================================================================================================== 00:19:03.509 [2024-12-10T03:06:57.898Z] Total : 3444.44 13.45 0.00 0.00 36610.97 10048.85 34952.53 00:19:03.509 { 00:19:03.509 "results": [ 00:19:03.509 { 00:19:03.509 "job": "nvme0n1", 00:19:03.509 "core_mask": "0x2", 00:19:03.509 "workload": "verify", 00:19:03.509 "status": "finished", 00:19:03.509 "verify_range": { 00:19:03.509 "start": 0, 00:19:03.509 "length": 8192 00:19:03.509 }, 00:19:03.509 "queue_depth": 128, 00:19:03.509 "io_size": 4096, 00:19:03.509 "runtime": 1.035873, 00:19:03.509 "iops": 3444.4376868592963, 00:19:03.509 "mibps": 13.454834714294126, 00:19:03.509 "io_failed": 0, 00:19:03.509 "io_timeout": 0, 00:19:03.509 "avg_latency_us": 36610.97454907822, 00:19:03.509 "min_latency_us": 10048.853333333333, 00:19:03.509 "max_latency_us": 34952.53333333333 00:19:03.509 } 00:19:03.509 ], 00:19:03.509 "core_count": 1 00:19:03.509 } 00:19:03.776 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:03.777 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.777 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.777 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.777 04:06:58 
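Editor's note: after the second pass the script snapshots both sides with save_config: rpc_cmd save_config on the target (captured into tgtcfg just below) and rpc.py -s /var/tmp/bdevperf.sock save_config for bdevperf (bperfcfg). Those dumps are what later gets echoed back into a fresh nvmf_tgt via -c /dev/fd/62, so the overall loop is: configure at runtime, dump, restart from the dump. A hedged sketch of that round trip, with the output file name chosen here purely for illustration:

  # Round-trip sketch: dump the live target configuration, then restart from it.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock save_config > /tmp/tgt_config.json
  kill "$nvmfpid" && wait "$nvmfpid"
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -c /tmp/tgt_config.json &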
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:03.777 "subsystems": [ 00:19:03.777 { 00:19:03.777 "subsystem": "keyring", 00:19:03.777 "config": [ 00:19:03.777 { 00:19:03.777 "method": "keyring_file_add_key", 00:19:03.777 "params": { 00:19:03.777 "name": "key0", 00:19:03.777 "path": "/tmp/tmp.pZNQTltgiX" 00:19:03.777 } 00:19:03.777 } 00:19:03.777 ] 00:19:03.777 }, 00:19:03.777 { 00:19:03.777 "subsystem": "iobuf", 00:19:03.777 "config": [ 00:19:03.777 { 00:19:03.777 "method": "iobuf_set_options", 00:19:03.777 "params": { 00:19:03.777 "small_pool_count": 8192, 00:19:03.777 "large_pool_count": 1024, 00:19:03.777 "small_bufsize": 8192, 00:19:03.777 "large_bufsize": 135168, 00:19:03.777 "enable_numa": false 00:19:03.777 } 00:19:03.777 } 00:19:03.777 ] 00:19:03.777 }, 00:19:03.777 { 00:19:03.777 "subsystem": "sock", 00:19:03.777 "config": [ 00:19:03.777 { 00:19:03.777 "method": "sock_set_default_impl", 00:19:03.777 "params": { 00:19:03.777 "impl_name": "posix" 00:19:03.777 } 00:19:03.777 }, 00:19:03.777 { 00:19:03.777 "method": "sock_impl_set_options", 00:19:03.777 "params": { 00:19:03.777 "impl_name": "ssl", 00:19:03.777 "recv_buf_size": 4096, 00:19:03.777 "send_buf_size": 4096, 00:19:03.777 "enable_recv_pipe": true, 00:19:03.777 "enable_quickack": false, 00:19:03.777 "enable_placement_id": 0, 00:19:03.777 "enable_zerocopy_send_server": true, 00:19:03.777 "enable_zerocopy_send_client": false, 00:19:03.777 "zerocopy_threshold": 0, 00:19:03.777 "tls_version": 0, 00:19:03.777 "enable_ktls": false 00:19:03.777 } 00:19:03.777 }, 00:19:03.777 { 00:19:03.777 "method": "sock_impl_set_options", 00:19:03.777 "params": { 00:19:03.777 "impl_name": "posix", 00:19:03.777 "recv_buf_size": 2097152, 00:19:03.777 "send_buf_size": 2097152, 00:19:03.777 "enable_recv_pipe": true, 00:19:03.777 "enable_quickack": false, 00:19:03.777 "enable_placement_id": 0, 00:19:03.777 "enable_zerocopy_send_server": true, 00:19:03.777 "enable_zerocopy_send_client": false, 00:19:03.777 "zerocopy_threshold": 0, 00:19:03.777 "tls_version": 0, 00:19:03.777 "enable_ktls": false 00:19:03.777 } 00:19:03.777 } 00:19:03.777 ] 00:19:03.777 }, 00:19:03.777 { 00:19:03.777 "subsystem": "vmd", 00:19:03.777 "config": [] 00:19:03.777 }, 00:19:03.777 { 00:19:03.777 "subsystem": "accel", 00:19:03.777 "config": [ 00:19:03.777 { 00:19:03.777 "method": "accel_set_options", 00:19:03.777 "params": { 00:19:03.777 "small_cache_size": 128, 00:19:03.777 "large_cache_size": 16, 00:19:03.777 "task_count": 2048, 00:19:03.777 "sequence_count": 2048, 00:19:03.777 "buf_count": 2048 00:19:03.777 } 00:19:03.777 } 00:19:03.777 ] 00:19:03.777 }, 00:19:03.777 { 00:19:03.777 "subsystem": "bdev", 00:19:03.777 "config": [ 00:19:03.777 { 00:19:03.777 "method": "bdev_set_options", 00:19:03.777 "params": { 00:19:03.777 "bdev_io_pool_size": 65535, 00:19:03.777 "bdev_io_cache_size": 256, 00:19:03.777 "bdev_auto_examine": true, 00:19:03.777 "iobuf_small_cache_size": 128, 00:19:03.777 "iobuf_large_cache_size": 16 00:19:03.777 } 00:19:03.777 }, 00:19:03.777 { 00:19:03.777 "method": "bdev_raid_set_options", 00:19:03.777 "params": { 00:19:03.777 "process_window_size_kb": 1024, 00:19:03.777 "process_max_bandwidth_mb_sec": 0 00:19:03.777 } 00:19:03.777 }, 00:19:03.777 { 00:19:03.777 "method": "bdev_iscsi_set_options", 00:19:03.777 "params": { 00:19:03.777 "timeout_sec": 30 00:19:03.777 } 00:19:03.777 }, 00:19:03.777 { 00:19:03.777 "method": "bdev_nvme_set_options", 00:19:03.777 "params": { 00:19:03.777 "action_on_timeout": "none", 00:19:03.777 
"timeout_us": 0, 00:19:03.777 "timeout_admin_us": 0, 00:19:03.777 "keep_alive_timeout_ms": 10000, 00:19:03.777 "arbitration_burst": 0, 00:19:03.777 "low_priority_weight": 0, 00:19:03.777 "medium_priority_weight": 0, 00:19:03.777 "high_priority_weight": 0, 00:19:03.777 "nvme_adminq_poll_period_us": 10000, 00:19:03.777 "nvme_ioq_poll_period_us": 0, 00:19:03.777 "io_queue_requests": 0, 00:19:03.777 "delay_cmd_submit": true, 00:19:03.777 "transport_retry_count": 4, 00:19:03.777 "bdev_retry_count": 3, 00:19:03.777 "transport_ack_timeout": 0, 00:19:03.777 "ctrlr_loss_timeout_sec": 0, 00:19:03.777 "reconnect_delay_sec": 0, 00:19:03.777 "fast_io_fail_timeout_sec": 0, 00:19:03.777 "disable_auto_failback": false, 00:19:03.777 "generate_uuids": false, 00:19:03.777 "transport_tos": 0, 00:19:03.777 "nvme_error_stat": false, 00:19:03.777 "rdma_srq_size": 0, 00:19:03.777 "io_path_stat": false, 00:19:03.777 "allow_accel_sequence": false, 00:19:03.777 "rdma_max_cq_size": 0, 00:19:03.777 "rdma_cm_event_timeout_ms": 0, 00:19:03.777 "dhchap_digests": [ 00:19:03.777 "sha256", 00:19:03.777 "sha384", 00:19:03.777 "sha512" 00:19:03.777 ], 00:19:03.777 "dhchap_dhgroups": [ 00:19:03.777 "null", 00:19:03.777 "ffdhe2048", 00:19:03.777 "ffdhe3072", 00:19:03.777 "ffdhe4096", 00:19:03.777 "ffdhe6144", 00:19:03.777 "ffdhe8192" 00:19:03.777 ] 00:19:03.777 } 00:19:03.777 }, 00:19:03.777 { 00:19:03.777 "method": "bdev_nvme_set_hotplug", 00:19:03.777 "params": { 00:19:03.777 "period_us": 100000, 00:19:03.777 "enable": false 00:19:03.777 } 00:19:03.777 }, 00:19:03.777 { 00:19:03.777 "method": "bdev_malloc_create", 00:19:03.777 "params": { 00:19:03.777 "name": "malloc0", 00:19:03.777 "num_blocks": 8192, 00:19:03.777 "block_size": 4096, 00:19:03.777 "physical_block_size": 4096, 00:19:03.777 "uuid": "d498d850-a1fa-405b-96d3-c94fe2cf8421", 00:19:03.777 "optimal_io_boundary": 0, 00:19:03.777 "md_size": 0, 00:19:03.778 "dif_type": 0, 00:19:03.778 "dif_is_head_of_md": false, 00:19:03.778 "dif_pi_format": 0 00:19:03.778 } 00:19:03.778 }, 00:19:03.778 { 00:19:03.778 "method": "bdev_wait_for_examine" 00:19:03.778 } 00:19:03.778 ] 00:19:03.778 }, 00:19:03.778 { 00:19:03.778 "subsystem": "nbd", 00:19:03.778 "config": [] 00:19:03.778 }, 00:19:03.778 { 00:19:03.778 "subsystem": "scheduler", 00:19:03.778 "config": [ 00:19:03.778 { 00:19:03.778 "method": "framework_set_scheduler", 00:19:03.778 "params": { 00:19:03.778 "name": "static" 00:19:03.778 } 00:19:03.778 } 00:19:03.778 ] 00:19:03.778 }, 00:19:03.778 { 00:19:03.778 "subsystem": "nvmf", 00:19:03.778 "config": [ 00:19:03.778 { 00:19:03.778 "method": "nvmf_set_config", 00:19:03.778 "params": { 00:19:03.778 "discovery_filter": "match_any", 00:19:03.778 "admin_cmd_passthru": { 00:19:03.778 "identify_ctrlr": false 00:19:03.778 }, 00:19:03.778 "dhchap_digests": [ 00:19:03.778 "sha256", 00:19:03.778 "sha384", 00:19:03.778 "sha512" 00:19:03.778 ], 00:19:03.778 "dhchap_dhgroups": [ 00:19:03.778 "null", 00:19:03.778 "ffdhe2048", 00:19:03.778 "ffdhe3072", 00:19:03.778 "ffdhe4096", 00:19:03.778 "ffdhe6144", 00:19:03.778 "ffdhe8192" 00:19:03.778 ] 00:19:03.778 } 00:19:03.778 }, 00:19:03.778 { 00:19:03.778 "method": "nvmf_set_max_subsystems", 00:19:03.778 "params": { 00:19:03.778 "max_subsystems": 1024 00:19:03.778 } 00:19:03.778 }, 00:19:03.778 { 00:19:03.778 "method": "nvmf_set_crdt", 00:19:03.778 "params": { 00:19:03.778 "crdt1": 0, 00:19:03.778 "crdt2": 0, 00:19:03.778 "crdt3": 0 00:19:03.778 } 00:19:03.778 }, 00:19:03.778 { 00:19:03.778 "method": "nvmf_create_transport", 00:19:03.778 "params": 
{ 00:19:03.778 "trtype": "TCP", 00:19:03.778 "max_queue_depth": 128, 00:19:03.778 "max_io_qpairs_per_ctrlr": 127, 00:19:03.778 "in_capsule_data_size": 4096, 00:19:03.778 "max_io_size": 131072, 00:19:03.778 "io_unit_size": 131072, 00:19:03.778 "max_aq_depth": 128, 00:19:03.778 "num_shared_buffers": 511, 00:19:03.778 "buf_cache_size": 4294967295, 00:19:03.778 "dif_insert_or_strip": false, 00:19:03.778 "zcopy": false, 00:19:03.778 "c2h_success": false, 00:19:03.778 "sock_priority": 0, 00:19:03.778 "abort_timeout_sec": 1, 00:19:03.778 "ack_timeout": 0, 00:19:03.778 "data_wr_pool_size": 0 00:19:03.778 } 00:19:03.778 }, 00:19:03.778 { 00:19:03.778 "method": "nvmf_create_subsystem", 00:19:03.778 "params": { 00:19:03.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.778 "allow_any_host": false, 00:19:03.778 "serial_number": "00000000000000000000", 00:19:03.778 "model_number": "SPDK bdev Controller", 00:19:03.778 "max_namespaces": 32, 00:19:03.778 "min_cntlid": 1, 00:19:03.778 "max_cntlid": 65519, 00:19:03.778 "ana_reporting": false 00:19:03.778 } 00:19:03.778 }, 00:19:03.778 { 00:19:03.778 "method": "nvmf_subsystem_add_host", 00:19:03.778 "params": { 00:19:03.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.778 "host": "nqn.2016-06.io.spdk:host1", 00:19:03.778 "psk": "key0" 00:19:03.778 } 00:19:03.778 }, 00:19:03.778 { 00:19:03.778 "method": "nvmf_subsystem_add_ns", 00:19:03.778 "params": { 00:19:03.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.778 "namespace": { 00:19:03.778 "nsid": 1, 00:19:03.778 "bdev_name": "malloc0", 00:19:03.778 "nguid": "D498D850A1FA405B96D3C94FE2CF8421", 00:19:03.778 "uuid": "d498d850-a1fa-405b-96d3-c94fe2cf8421", 00:19:03.778 "no_auto_visible": false 00:19:03.778 } 00:19:03.778 } 00:19:03.778 }, 00:19:03.778 { 00:19:03.778 "method": "nvmf_subsystem_add_listener", 00:19:03.778 "params": { 00:19:03.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.778 "listen_address": { 00:19:03.778 "trtype": "TCP", 00:19:03.778 "adrfam": "IPv4", 00:19:03.778 "traddr": "10.0.0.2", 00:19:03.778 "trsvcid": "4420" 00:19:03.778 }, 00:19:03.778 "secure_channel": false, 00:19:03.778 "sock_impl": "ssl" 00:19:03.778 } 00:19:03.778 } 00:19:03.778 ] 00:19:03.778 } 00:19:03.778 ] 00:19:03.778 }' 00:19:03.778 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:04.039 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:04.039 "subsystems": [ 00:19:04.039 { 00:19:04.039 "subsystem": "keyring", 00:19:04.039 "config": [ 00:19:04.039 { 00:19:04.039 "method": "keyring_file_add_key", 00:19:04.039 "params": { 00:19:04.039 "name": "key0", 00:19:04.039 "path": "/tmp/tmp.pZNQTltgiX" 00:19:04.039 } 00:19:04.039 } 00:19:04.039 ] 00:19:04.039 }, 00:19:04.039 { 00:19:04.039 "subsystem": "iobuf", 00:19:04.039 "config": [ 00:19:04.039 { 00:19:04.039 "method": "iobuf_set_options", 00:19:04.039 "params": { 00:19:04.039 "small_pool_count": 8192, 00:19:04.039 "large_pool_count": 1024, 00:19:04.039 "small_bufsize": 8192, 00:19:04.039 "large_bufsize": 135168, 00:19:04.039 "enable_numa": false 00:19:04.039 } 00:19:04.039 } 00:19:04.039 ] 00:19:04.039 }, 00:19:04.039 { 00:19:04.039 "subsystem": "sock", 00:19:04.039 "config": [ 00:19:04.039 { 00:19:04.039 "method": "sock_set_default_impl", 00:19:04.039 "params": { 00:19:04.039 "impl_name": "posix" 00:19:04.039 } 00:19:04.039 }, 00:19:04.039 { 00:19:04.039 "method": "sock_impl_set_options", 00:19:04.039 
"params": { 00:19:04.039 "impl_name": "ssl", 00:19:04.039 "recv_buf_size": 4096, 00:19:04.039 "send_buf_size": 4096, 00:19:04.039 "enable_recv_pipe": true, 00:19:04.039 "enable_quickack": false, 00:19:04.039 "enable_placement_id": 0, 00:19:04.039 "enable_zerocopy_send_server": true, 00:19:04.039 "enable_zerocopy_send_client": false, 00:19:04.039 "zerocopy_threshold": 0, 00:19:04.039 "tls_version": 0, 00:19:04.039 "enable_ktls": false 00:19:04.039 } 00:19:04.039 }, 00:19:04.039 { 00:19:04.039 "method": "sock_impl_set_options", 00:19:04.039 "params": { 00:19:04.039 "impl_name": "posix", 00:19:04.039 "recv_buf_size": 2097152, 00:19:04.039 "send_buf_size": 2097152, 00:19:04.039 "enable_recv_pipe": true, 00:19:04.039 "enable_quickack": false, 00:19:04.039 "enable_placement_id": 0, 00:19:04.039 "enable_zerocopy_send_server": true, 00:19:04.039 "enable_zerocopy_send_client": false, 00:19:04.039 "zerocopy_threshold": 0, 00:19:04.039 "tls_version": 0, 00:19:04.039 "enable_ktls": false 00:19:04.039 } 00:19:04.039 } 00:19:04.039 ] 00:19:04.039 }, 00:19:04.039 { 00:19:04.039 "subsystem": "vmd", 00:19:04.039 "config": [] 00:19:04.039 }, 00:19:04.039 { 00:19:04.039 "subsystem": "accel", 00:19:04.039 "config": [ 00:19:04.039 { 00:19:04.039 "method": "accel_set_options", 00:19:04.039 "params": { 00:19:04.039 "small_cache_size": 128, 00:19:04.039 "large_cache_size": 16, 00:19:04.039 "task_count": 2048, 00:19:04.039 "sequence_count": 2048, 00:19:04.039 "buf_count": 2048 00:19:04.039 } 00:19:04.039 } 00:19:04.039 ] 00:19:04.039 }, 00:19:04.039 { 00:19:04.039 "subsystem": "bdev", 00:19:04.039 "config": [ 00:19:04.039 { 00:19:04.039 "method": "bdev_set_options", 00:19:04.039 "params": { 00:19:04.039 "bdev_io_pool_size": 65535, 00:19:04.039 "bdev_io_cache_size": 256, 00:19:04.039 "bdev_auto_examine": true, 00:19:04.039 "iobuf_small_cache_size": 128, 00:19:04.039 "iobuf_large_cache_size": 16 00:19:04.039 } 00:19:04.039 }, 00:19:04.039 { 00:19:04.039 "method": "bdev_raid_set_options", 00:19:04.039 "params": { 00:19:04.039 "process_window_size_kb": 1024, 00:19:04.039 "process_max_bandwidth_mb_sec": 0 00:19:04.039 } 00:19:04.039 }, 00:19:04.039 { 00:19:04.039 "method": "bdev_iscsi_set_options", 00:19:04.039 "params": { 00:19:04.039 "timeout_sec": 30 00:19:04.039 } 00:19:04.039 }, 00:19:04.039 { 00:19:04.039 "method": "bdev_nvme_set_options", 00:19:04.039 "params": { 00:19:04.039 "action_on_timeout": "none", 00:19:04.039 "timeout_us": 0, 00:19:04.039 "timeout_admin_us": 0, 00:19:04.039 "keep_alive_timeout_ms": 10000, 00:19:04.039 "arbitration_burst": 0, 00:19:04.039 "low_priority_weight": 0, 00:19:04.039 "medium_priority_weight": 0, 00:19:04.039 "high_priority_weight": 0, 00:19:04.039 "nvme_adminq_poll_period_us": 10000, 00:19:04.039 "nvme_ioq_poll_period_us": 0, 00:19:04.039 "io_queue_requests": 512, 00:19:04.040 "delay_cmd_submit": true, 00:19:04.040 "transport_retry_count": 4, 00:19:04.040 "bdev_retry_count": 3, 00:19:04.040 "transport_ack_timeout": 0, 00:19:04.040 "ctrlr_loss_timeout_sec": 0, 00:19:04.040 "reconnect_delay_sec": 0, 00:19:04.040 "fast_io_fail_timeout_sec": 0, 00:19:04.040 "disable_auto_failback": false, 00:19:04.040 "generate_uuids": false, 00:19:04.040 "transport_tos": 0, 00:19:04.040 "nvme_error_stat": false, 00:19:04.040 "rdma_srq_size": 0, 00:19:04.040 "io_path_stat": false, 00:19:04.040 "allow_accel_sequence": false, 00:19:04.040 "rdma_max_cq_size": 0, 00:19:04.040 "rdma_cm_event_timeout_ms": 0, 00:19:04.040 "dhchap_digests": [ 00:19:04.040 "sha256", 00:19:04.040 "sha384", 00:19:04.040 
"sha512" 00:19:04.040 ], 00:19:04.040 "dhchap_dhgroups": [ 00:19:04.040 "null", 00:19:04.040 "ffdhe2048", 00:19:04.040 "ffdhe3072", 00:19:04.040 "ffdhe4096", 00:19:04.040 "ffdhe6144", 00:19:04.040 "ffdhe8192" 00:19:04.040 ] 00:19:04.040 } 00:19:04.040 }, 00:19:04.040 { 00:19:04.040 "method": "bdev_nvme_attach_controller", 00:19:04.040 "params": { 00:19:04.040 "name": "nvme0", 00:19:04.040 "trtype": "TCP", 00:19:04.040 "adrfam": "IPv4", 00:19:04.040 "traddr": "10.0.0.2", 00:19:04.040 "trsvcid": "4420", 00:19:04.040 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.040 "prchk_reftag": false, 00:19:04.040 "prchk_guard": false, 00:19:04.040 "ctrlr_loss_timeout_sec": 0, 00:19:04.040 "reconnect_delay_sec": 0, 00:19:04.040 "fast_io_fail_timeout_sec": 0, 00:19:04.040 "psk": "key0", 00:19:04.040 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.040 "hdgst": false, 00:19:04.040 "ddgst": false, 00:19:04.040 "multipath": "multipath" 00:19:04.040 } 00:19:04.040 }, 00:19:04.040 { 00:19:04.040 "method": "bdev_nvme_set_hotplug", 00:19:04.040 "params": { 00:19:04.040 "period_us": 100000, 00:19:04.040 "enable": false 00:19:04.040 } 00:19:04.040 }, 00:19:04.040 { 00:19:04.040 "method": "bdev_enable_histogram", 00:19:04.040 "params": { 00:19:04.040 "name": "nvme0n1", 00:19:04.040 "enable": true 00:19:04.040 } 00:19:04.040 }, 00:19:04.040 { 00:19:04.040 "method": "bdev_wait_for_examine" 00:19:04.040 } 00:19:04.040 ] 00:19:04.040 }, 00:19:04.040 { 00:19:04.040 "subsystem": "nbd", 00:19:04.040 "config": [] 00:19:04.040 } 00:19:04.040 ] 00:19:04.040 }' 00:19:04.040 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2418499 00:19:04.040 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2418499 ']' 00:19:04.040 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2418499 00:19:04.040 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:04.040 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.040 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2418499 00:19:04.040 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:04.040 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:04.040 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2418499' 00:19:04.040 killing process with pid 2418499 00:19:04.040 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2418499 00:19:04.040 Received shutdown signal, test time was about 1.000000 seconds 00:19:04.040 00:19:04.040 Latency(us) 00:19:04.040 [2024-12-10T03:06:58.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.040 [2024-12-10T03:06:58.429Z] =================================================================================================================== 00:19:04.040 [2024-12-10T03:06:58.429Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:04.040 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2418499 00:19:04.298 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2418468 00:19:04.298 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2418468 
']' 00:19:04.298 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2418468 00:19:04.298 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:04.298 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.298 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2418468 00:19:04.298 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.298 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.298 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2418468' 00:19:04.298 killing process with pid 2418468 00:19:04.298 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2418468 00:19:04.298 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2418468 00:19:04.556 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:04.556 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:04.556 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:04.556 "subsystems": [ 00:19:04.556 { 00:19:04.556 "subsystem": "keyring", 00:19:04.556 "config": [ 00:19:04.556 { 00:19:04.556 "method": "keyring_file_add_key", 00:19:04.556 "params": { 00:19:04.556 "name": "key0", 00:19:04.556 "path": "/tmp/tmp.pZNQTltgiX" 00:19:04.556 } 00:19:04.556 } 00:19:04.556 ] 00:19:04.556 }, 00:19:04.556 { 00:19:04.556 "subsystem": "iobuf", 00:19:04.556 "config": [ 00:19:04.556 { 00:19:04.556 "method": "iobuf_set_options", 00:19:04.556 "params": { 00:19:04.556 "small_pool_count": 8192, 00:19:04.556 "large_pool_count": 1024, 00:19:04.556 "small_bufsize": 8192, 00:19:04.556 "large_bufsize": 135168, 00:19:04.556 "enable_numa": false 00:19:04.556 } 00:19:04.556 } 00:19:04.556 ] 00:19:04.556 }, 00:19:04.556 { 00:19:04.556 "subsystem": "sock", 00:19:04.556 "config": [ 00:19:04.556 { 00:19:04.556 "method": "sock_set_default_impl", 00:19:04.556 "params": { 00:19:04.556 "impl_name": "posix" 00:19:04.556 } 00:19:04.556 }, 00:19:04.556 { 00:19:04.556 "method": "sock_impl_set_options", 00:19:04.556 "params": { 00:19:04.556 "impl_name": "ssl", 00:19:04.556 "recv_buf_size": 4096, 00:19:04.556 "send_buf_size": 4096, 00:19:04.556 "enable_recv_pipe": true, 00:19:04.556 "enable_quickack": false, 00:19:04.556 "enable_placement_id": 0, 00:19:04.556 "enable_zerocopy_send_server": true, 00:19:04.556 "enable_zerocopy_send_client": false, 00:19:04.556 "zerocopy_threshold": 0, 00:19:04.556 "tls_version": 0, 00:19:04.556 "enable_ktls": false 00:19:04.556 } 00:19:04.556 }, 00:19:04.556 { 00:19:04.556 "method": "sock_impl_set_options", 00:19:04.556 "params": { 00:19:04.556 "impl_name": "posix", 00:19:04.556 "recv_buf_size": 2097152, 00:19:04.556 "send_buf_size": 2097152, 00:19:04.556 "enable_recv_pipe": true, 00:19:04.556 "enable_quickack": false, 00:19:04.556 "enable_placement_id": 0, 00:19:04.556 "enable_zerocopy_send_server": true, 00:19:04.556 "enable_zerocopy_send_client": false, 00:19:04.556 "zerocopy_threshold": 0, 00:19:04.556 "tls_version": 0, 00:19:04.556 "enable_ktls": false 00:19:04.556 } 00:19:04.556 } 00:19:04.556 ] 00:19:04.556 }, 00:19:04.556 { 00:19:04.556 "subsystem": 
"vmd", 00:19:04.556 "config": [] 00:19:04.556 }, 00:19:04.556 { 00:19:04.556 "subsystem": "accel", 00:19:04.556 "config": [ 00:19:04.556 { 00:19:04.556 "method": "accel_set_options", 00:19:04.556 "params": { 00:19:04.556 "small_cache_size": 128, 00:19:04.556 "large_cache_size": 16, 00:19:04.556 "task_count": 2048, 00:19:04.556 "sequence_count": 2048, 00:19:04.556 "buf_count": 2048 00:19:04.556 } 00:19:04.556 } 00:19:04.556 ] 00:19:04.556 }, 00:19:04.556 { 00:19:04.556 "subsystem": "bdev", 00:19:04.556 "config": [ 00:19:04.556 { 00:19:04.556 "method": "bdev_set_options", 00:19:04.556 "params": { 00:19:04.556 "bdev_io_pool_size": 65535, 00:19:04.556 "bdev_io_cache_size": 256, 00:19:04.556 "bdev_auto_examine": true, 00:19:04.556 "iobuf_small_cache_size": 128, 00:19:04.556 "iobuf_large_cache_size": 16 00:19:04.556 } 00:19:04.556 }, 00:19:04.556 { 00:19:04.556 "method": "bdev_raid_set_options", 00:19:04.556 "params": { 00:19:04.556 "process_window_size_kb": 1024, 00:19:04.556 "process_max_bandwidth_mb_sec": 0 00:19:04.556 } 00:19:04.556 }, 00:19:04.556 { 00:19:04.556 "method": "bdev_iscsi_set_options", 00:19:04.556 "params": { 00:19:04.556 "timeout_sec": 30 00:19:04.556 } 00:19:04.556 }, 00:19:04.556 { 00:19:04.556 "method": "bdev_nvme_set_options", 00:19:04.556 "params": { 00:19:04.556 "action_on_timeout": "none", 00:19:04.556 "timeout_us": 0, 00:19:04.556 "timeout_admin_us": 0, 00:19:04.556 "keep_alive_timeout_ms": 10000, 00:19:04.556 "arbitration_burst": 0, 00:19:04.556 "low_priority_weight": 0, 00:19:04.556 "medium_priority_weight": 0, 00:19:04.556 "high_priority_weight": 0, 00:19:04.556 "nvme_adminq_poll_period_us": 10000, 00:19:04.556 "nvme_ioq_poll_period_us": 0, 00:19:04.556 "io_queue_requests": 0, 00:19:04.556 "delay_cmd_submit": true, 00:19:04.556 "transport_retry_count": 4, 00:19:04.556 "bdev_retry_count": 3, 00:19:04.556 "transport_ack_timeout": 0, 00:19:04.556 "ctrlr_loss_timeout_sec": 0, 00:19:04.556 "reconnect_delay_sec": 0, 00:19:04.556 "fast_io_fail_timeout_sec": 0, 00:19:04.556 "disable_auto_failback": false, 00:19:04.556 "generate_uuids": false, 00:19:04.556 "transport_tos": 0, 00:19:04.557 "nvme_error_stat": false, 00:19:04.557 "rdma_srq_size": 0, 00:19:04.557 "io_path_stat": false, 00:19:04.557 "allow_accel_sequence": false, 00:19:04.557 "rdma_max_cq_size": 0, 00:19:04.557 "rdma_cm_event_timeout_ms": 0, 00:19:04.557 "dhchap_digests": [ 00:19:04.557 "sha256", 00:19:04.557 "sha384", 00:19:04.557 "sha512" 00:19:04.557 ], 00:19:04.557 "dhchap_dhgroups": [ 00:19:04.557 "null", 00:19:04.557 "ffdhe2048", 00:19:04.557 "ffdhe3072", 00:19:04.557 "ffdhe4096", 00:19:04.557 "ffdhe6144", 00:19:04.557 "ffdhe8192" 00:19:04.557 ] 00:19:04.557 } 00:19:04.557 }, 00:19:04.557 { 00:19:04.557 "method": "bdev_nvme_set_hotplug", 00:19:04.557 "params": { 00:19:04.557 "period_us": 100000, 00:19:04.557 "enable": false 00:19:04.557 } 00:19:04.557 }, 00:19:04.557 { 00:19:04.557 "method": "bdev_malloc_create", 00:19:04.557 "params": { 00:19:04.557 "name": "malloc0", 00:19:04.557 "num_blocks": 8192, 00:19:04.557 "block_size": 4096, 00:19:04.557 "physical_block_size": 4096, 00:19:04.557 "uuid": "d498d850-a1fa-405b-96d3-c94fe2cf8421", 00:19:04.557 "optimal_io_boundary": 0, 00:19:04.557 "md_size": 0, 00:19:04.557 "dif_type": 0, 00:19:04.557 "dif_is_head_of_md": false, 00:19:04.557 "dif_pi_format": 0 00:19:04.557 } 00:19:04.557 }, 00:19:04.557 { 00:19:04.557 "method": "bdev_wait_for_examine" 00:19:04.557 } 00:19:04.557 ] 00:19:04.557 }, 00:19:04.557 { 00:19:04.557 "subsystem": "nbd", 00:19:04.557 "config": 
[] 00:19:04.557 }, 00:19:04.557 { 00:19:04.557 "subsystem": "scheduler", 00:19:04.557 "config": [ 00:19:04.557 { 00:19:04.557 "method": "framework_set_scheduler", 00:19:04.557 "params": { 00:19:04.557 "name": "static" 00:19:04.557 } 00:19:04.557 } 00:19:04.557 ] 00:19:04.557 }, 00:19:04.557 { 00:19:04.557 "subsystem": "nvmf", 00:19:04.557 "config": [ 00:19:04.557 { 00:19:04.557 "method": "nvmf_set_config", 00:19:04.557 "params": { 00:19:04.557 "discovery_filter": "match_any", 00:19:04.557 "admin_cmd_passthru": { 00:19:04.557 "identify_ctrlr": false 00:19:04.557 }, 00:19:04.557 "dhchap_digests": [ 00:19:04.557 "sha256", 00:19:04.557 "sha384", 00:19:04.557 "sha512" 00:19:04.557 ], 00:19:04.557 "dhchap_dhgroups": [ 00:19:04.557 "null", 00:19:04.557 "ffdhe2048", 00:19:04.557 "ffdhe3072", 00:19:04.557 "ffdhe4096", 00:19:04.557 "ffdhe6144", 00:19:04.557 "ffdhe8192" 00:19:04.557 ] 00:19:04.557 } 00:19:04.557 }, 00:19:04.557 { 00:19:04.557 "method": "nvmf_set_max_subsystems", 00:19:04.557 "params": { 00:19:04.557 "max_subsystems": 1024 00:19:04.557 } 00:19:04.557 }, 00:19:04.557 { 00:19:04.557 "method": "nvmf_set_crdt", 00:19:04.557 "params": { 00:19:04.557 "crdt1": 0, 00:19:04.557 "crdt2": 0, 00:19:04.557 "crdt3": 0 00:19:04.557 } 00:19:04.557 }, 00:19:04.557 { 00:19:04.557 "method": "nvmf_create_transport", 00:19:04.557 "params": { 00:19:04.557 "trtype": "TCP", 00:19:04.557 "max_queue_depth": 128, 00:19:04.557 "max_io_qpairs_per_ctrlr": 127, 00:19:04.557 "in_capsule_data_size": 4096, 00:19:04.557 "max_io_size": 131072, 00:19:04.557 "io_unit_size": 131072, 00:19:04.557 "max_aq_depth": 128, 00:19:04.557 "num_shared_buffers": 511, 00:19:04.557 "buf_cache_size": 4294967295, 00:19:04.557 "dif_insert_or_strip": false, 00:19:04.557 "zcopy": false, 00:19:04.557 "c2h_success": false, 00:19:04.557 "sock_priority": 0, 00:19:04.557 "abort_timeout_sec": 1, 00:19:04.557 "ack_timeout": 0, 00:19:04.557 "data_wr_pool_size": 0 00:19:04.557 } 00:19:04.557 }, 00:19:04.557 { 00:19:04.557 "method": "nvmf_create_subsystem", 00:19:04.557 "params": { 00:19:04.557 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.557 "allow_any_host": false, 00:19:04.557 "serial_number": "00000000000000000000", 00:19:04.557 "model_number": "SPDK bdev Controller", 00:19:04.557 "max_namespaces": 32, 00:19:04.557 "min_cntlid": 1, 00:19:04.557 "max_cntlid": 65519, 00:19:04.557 "ana_reporting": false 00:19:04.557 } 00:19:04.557 }, 00:19:04.557 { 00:19:04.557 "method": "nvmf_subsystem_add_host", 00:19:04.557 "params": { 00:19:04.557 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.557 "host": "nqn.2016-06.io.spdk:host1", 00:19:04.557 "psk": "key0" 00:19:04.557 } 00:19:04.557 }, 00:19:04.557 { 00:19:04.557 "method": "nvmf_subsystem_add_ns", 00:19:04.557 "params": { 00:19:04.557 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.557 "namespace": { 00:19:04.557 "nsid": 1, 00:19:04.557 "bdev_name": "malloc0", 00:19:04.557 "nguid": "D498D850A1FA405B96D3C94FE2CF8421", 00:19:04.557 "uuid": "d498d850-a1fa-405b-96d3-c94fe2cf8421", 00:19:04.557 "no_auto_visible": false 00:19:04.557 } 00:19:04.557 } 00:19:04.557 }, 00:19:04.557 { 00:19:04.557 "method": "nvmf_subsystem_add_listener", 00:19:04.557 "params": { 00:19:04.557 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.557 "listen_address": { 00:19:04.557 "trtype": "TCP", 00:19:04.557 "adrfam": "IPv4", 00:19:04.557 "traddr": "10.0.0.2", 00:19:04.557 "trsvcid": "4420" 00:19:04.557 }, 00:19:04.557 "secure_channel": false, 00:19:04.557 "sock_impl": "ssl" 00:19:04.557 } 00:19:04.557 } 00:19:04.557 ] 00:19:04.557 } 
00:19:04.557 ] 00:19:04.557 }' 00:19:04.557 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:04.557 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.557 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2418899 00:19:04.557 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:04.557 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2418899 00:19:04.557 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2418899 ']' 00:19:04.557 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.557 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.557 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.557 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.557 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.557 [2024-12-10 04:06:58.909772] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:19:04.557 [2024-12-10 04:06:58.909863] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.816 [2024-12-10 04:06:58.980168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.816 [2024-12-10 04:06:59.030048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.816 [2024-12-10 04:06:59.030109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.816 [2024-12-10 04:06:59.030138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.816 [2024-12-10 04:06:59.030149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.816 [2024-12-10 04:06:59.030159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
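For orientation, a minimal sketch (not part of this log) of the start-up pattern visible above: the test script echoes the full JSON subsystem configuration and hands it to nvmf_tgt on a file descriptor, so the target comes up already holding the TLS PSK (key0), the malloc0 namespace and the ssl-impl listener. The binary path, the -i/-e flags and the /tmp/tmp.pZNQTltgiX key path are the ones from this run (which additionally wraps the command in ip netns exec cvl_0_0_ns_spdk); the configuration below is abbreviated to the keyring entry only.

    # Config-over-fd start-up, condensed from the commands above; the real
    # config carries the full keyring/sock/bdev/scheduler/nvmf sections.
    config='{ "subsystems": [ { "subsystem": "keyring", "config": [
      { "method": "keyring_file_add_key",
        "params": { "name": "key0", "path": "/tmp/tmp.pZNQTltgiX" } } ] } ] }'
    # Process substitution exposes the JSON on a /dev/fd/NN path, which is
    # exactly how /dev/fd/62 appears in the nvmf_tgt command line above.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$config") &
    nvmfpid=$!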
00:19:04.816 [2024-12-10 04:06:59.030782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.075 [2024-12-10 04:06:59.278730] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.075 [2024-12-10 04:06:59.310744] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:05.075 [2024-12-10 04:06:59.311029] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.641 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.641 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:05.641 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:05.641 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:05.641 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.641 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.641 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2419053 00:19:05.641 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2419053 /var/tmp/bdevperf.sock 00:19:05.641 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2419053 ']' 00:19:05.641 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.641 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:05.641 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.641 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:05.641 "subsystems": [ 00:19:05.641 { 00:19:05.641 "subsystem": "keyring", 00:19:05.641 "config": [ 00:19:05.641 { 00:19:05.641 "method": "keyring_file_add_key", 00:19:05.641 "params": { 00:19:05.641 "name": "key0", 00:19:05.641 "path": "/tmp/tmp.pZNQTltgiX" 00:19:05.641 } 00:19:05.641 } 00:19:05.641 ] 00:19:05.641 }, 00:19:05.641 { 00:19:05.641 "subsystem": "iobuf", 00:19:05.641 "config": [ 00:19:05.641 { 00:19:05.641 "method": "iobuf_set_options", 00:19:05.641 "params": { 00:19:05.641 "small_pool_count": 8192, 00:19:05.641 "large_pool_count": 1024, 00:19:05.641 "small_bufsize": 8192, 00:19:05.641 "large_bufsize": 135168, 00:19:05.641 "enable_numa": false 00:19:05.641 } 00:19:05.641 } 00:19:05.641 ] 00:19:05.641 }, 00:19:05.641 { 00:19:05.641 "subsystem": "sock", 00:19:05.641 "config": [ 00:19:05.641 { 00:19:05.641 "method": "sock_set_default_impl", 00:19:05.641 "params": { 00:19:05.641 "impl_name": "posix" 00:19:05.641 } 00:19:05.641 }, 00:19:05.641 { 00:19:05.641 "method": "sock_impl_set_options", 00:19:05.641 "params": { 00:19:05.641 "impl_name": "ssl", 00:19:05.641 "recv_buf_size": 4096, 00:19:05.641 "send_buf_size": 4096, 00:19:05.641 "enable_recv_pipe": true, 00:19:05.641 "enable_quickack": false, 00:19:05.641 "enable_placement_id": 0, 00:19:05.641 "enable_zerocopy_send_server": true, 00:19:05.641 "enable_zerocopy_send_client": false, 00:19:05.641 "zerocopy_threshold": 0, 00:19:05.641 "tls_version": 0, 00:19:05.641 
"enable_ktls": false 00:19:05.641 } 00:19:05.642 }, 00:19:05.642 { 00:19:05.642 "method": "sock_impl_set_options", 00:19:05.642 "params": { 00:19:05.642 "impl_name": "posix", 00:19:05.642 "recv_buf_size": 2097152, 00:19:05.642 "send_buf_size": 2097152, 00:19:05.642 "enable_recv_pipe": true, 00:19:05.642 "enable_quickack": false, 00:19:05.642 "enable_placement_id": 0, 00:19:05.642 "enable_zerocopy_send_server": true, 00:19:05.642 "enable_zerocopy_send_client": false, 00:19:05.642 "zerocopy_threshold": 0, 00:19:05.642 "tls_version": 0, 00:19:05.642 "enable_ktls": false 00:19:05.642 } 00:19:05.642 } 00:19:05.642 ] 00:19:05.642 }, 00:19:05.642 { 00:19:05.642 "subsystem": "vmd", 00:19:05.642 "config": [] 00:19:05.642 }, 00:19:05.642 { 00:19:05.642 "subsystem": "accel", 00:19:05.642 "config": [ 00:19:05.642 { 00:19:05.642 "method": "accel_set_options", 00:19:05.642 "params": { 00:19:05.642 "small_cache_size": 128, 00:19:05.642 "large_cache_size": 16, 00:19:05.642 "task_count": 2048, 00:19:05.642 "sequence_count": 2048, 00:19:05.642 "buf_count": 2048 00:19:05.642 } 00:19:05.642 } 00:19:05.642 ] 00:19:05.642 }, 00:19:05.642 { 00:19:05.642 "subsystem": "bdev", 00:19:05.642 "config": [ 00:19:05.642 { 00:19:05.642 "method": "bdev_set_options", 00:19:05.642 "params": { 00:19:05.642 "bdev_io_pool_size": 65535, 00:19:05.642 "bdev_io_cache_size": 256, 00:19:05.642 "bdev_auto_examine": true, 00:19:05.642 "iobuf_small_cache_size": 128, 00:19:05.642 "iobuf_large_cache_size": 16 00:19:05.642 } 00:19:05.642 }, 00:19:05.642 { 00:19:05.642 "method": "bdev_raid_set_options", 00:19:05.642 "params": { 00:19:05.642 "process_window_size_kb": 1024, 00:19:05.642 "process_max_bandwidth_mb_sec": 0 00:19:05.642 } 00:19:05.642 }, 00:19:05.642 { 00:19:05.642 "method": "bdev_iscsi_set_options", 00:19:05.642 "params": { 00:19:05.642 "timeout_sec": 30 00:19:05.642 } 00:19:05.642 }, 00:19:05.642 { 00:19:05.642 "method": "bdev_nvme_set_options", 00:19:05.642 "params": { 00:19:05.642 "action_on_timeout": "none", 00:19:05.642 "timeout_us": 0, 00:19:05.642 "timeout_admin_us": 0, 00:19:05.642 "keep_alive_timeout_ms": 10000, 00:19:05.642 "arbitration_burst": 0, 00:19:05.642 "low_priority_weight": 0, 00:19:05.642 "medium_priority_weight": 0, 00:19:05.642 "high_priority_weight": 0, 00:19:05.642 "nvme_adminq_poll_period_us": 10000, 00:19:05.642 "nvme_ioq_poll_period_us": 0, 00:19:05.642 "io_queue_requests": 512, 00:19:05.642 "delay_cmd_submit": true, 00:19:05.642 "transport_retry_count": 4, 00:19:05.642 "bdev_retry_count": 3, 00:19:05.642 "transport_ack_timeout": 0, 00:19:05.642 "ctrlr_loss_timeout_sec": 0, 00:19:05.642 "reconnect_delay_sec": 0, 00:19:05.642 "fast_io_fail_timeout_sec": 0, 00:19:05.642 "disable_auto_failback": false, 00:19:05.642 "generate_uuids": false, 00:19:05.642 "transport_tos": 0, 00:19:05.642 "nvme_error_stat": false, 00:19:05.642 "rdma_srq_size": 0, 00:19:05.642 "io_path_stat": false, 00:19:05.642 "allow_accel_sequence": false, 00:19:05.642 "rdma_max_cq_size": 0, 00:19:05.642 "rdma_cm_event_timeout_ms": 0, 00:19:05.642 "dhchap_digests": [ 00:19:05.642 "sha256", 00:19:05.642 "sha384", 00:19:05.642 "sha512" 00:19:05.642 ], 00:19:05.642 "dhchap_dhgroups": [ 00:19:05.642 "null", 00:19:05.642 "ffdhe2048", 00:19:05.642 "ffdhe3072", 00:19:05.642 "ffdhe4096", 00:19:05.642 "ffdhe6144", 00:19:05.642 "ffdhe8192" 00:19:05.642 ] 00:19:05.642 } 00:19:05.642 }, 00:19:05.642 { 00:19:05.642 "method": "bdev_nvme_attach_controller", 00:19:05.642 "params": { 00:19:05.642 "name": "nvme0", 00:19:05.642 "trtype": "TCP", 00:19:05.642 
"adrfam": "IPv4", 00:19:05.642 "traddr": "10.0.0.2", 00:19:05.642 "trsvcid": "4420", 00:19:05.642 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.642 "prchk_reftag": false, 00:19:05.642 "prchk_guard": false, 00:19:05.642 "ctrlr_loss_timeout_sec": 0, 00:19:05.642 "reconnect_delay_sec": 0, 00:19:05.642 "fast_io_fail_timeout_sec": 0, 00:19:05.642 "psk": "key0", 00:19:05.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.642 "hdgst": false, 00:19:05.642 "ddgst": false, 00:19:05.642 "multipath": "multipath" 00:19:05.642 } 00:19:05.642 }, 00:19:05.642 { 00:19:05.642 "method": "bdev_nvme_set_hotplug", 00:19:05.642 "params": { 00:19:05.642 "period_us": 100000, 00:19:05.642 "enable": false 00:19:05.642 } 00:19:05.642 }, 00:19:05.642 { 00:19:05.642 "method": "bdev_enable_histogram", 00:19:05.642 "params": { 00:19:05.642 "name": "nvme0n1", 00:19:05.642 "enable": true 00:19:05.642 } 00:19:05.642 }, 00:19:05.642 { 00:19:05.642 "method": "bdev_wait_for_examine" 00:19:05.642 } 00:19:05.642 ] 00:19:05.642 }, 00:19:05.642 { 00:19:05.642 "subsystem": "nbd", 00:19:05.642 "config": [] 00:19:05.642 } 00:19:05.642 ] 00:19:05.642 }' 00:19:05.642 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:05.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:05.642 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.642 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.642 [2024-12-10 04:06:59.976075] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:19:05.642 [2024-12-10 04:06:59.976159] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2419053 ] 00:19:05.905 [2024-12-10 04:07:00.046788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.905 [2024-12-10 04:07:00.106384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.194 [2024-12-10 04:07:00.290780] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:06.194 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.194 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:06.194 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:06.194 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:06.477 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.477 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:06.477 Running I/O for 1 seconds... 
00:19:07.858 3446.00 IOPS, 13.46 MiB/s 00:19:07.858 Latency(us) 00:19:07.858 [2024-12-10T03:07:02.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.858 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:07.858 Verification LBA range: start 0x0 length 0x2000 00:19:07.858 nvme0n1 : 1.02 3501.41 13.68 0.00 0.00 36199.08 8592.50 46020.84 00:19:07.858 [2024-12-10T03:07:02.247Z] =================================================================================================================== 00:19:07.858 [2024-12-10T03:07:02.247Z] Total : 3501.41 13.68 0.00 0.00 36199.08 8592.50 46020.84 00:19:07.858 { 00:19:07.858 "results": [ 00:19:07.858 { 00:19:07.858 "job": "nvme0n1", 00:19:07.858 "core_mask": "0x2", 00:19:07.858 "workload": "verify", 00:19:07.858 "status": "finished", 00:19:07.858 "verify_range": { 00:19:07.858 "start": 0, 00:19:07.858 "length": 8192 00:19:07.858 }, 00:19:07.858 "queue_depth": 128, 00:19:07.858 "io_size": 4096, 00:19:07.858 "runtime": 1.020733, 00:19:07.858 "iops": 3501.4053626168647, 00:19:07.858 "mibps": 13.677364697722128, 00:19:07.858 "io_failed": 0, 00:19:07.858 "io_timeout": 0, 00:19:07.858 "avg_latency_us": 36199.0810537006, 00:19:07.858 "min_latency_us": 8592.497777777779, 00:19:07.858 "max_latency_us": 46020.83555555555 00:19:07.858 } 00:19:07.858 ], 00:19:07.858 "core_count": 1 00:19:07.858 } 00:19:07.858 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:07.858 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:07.858 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:07.858 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:07.858 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:07.858 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:07.858 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:07.858 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:07.858 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:07.858 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:07.858 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:07.858 nvmf_trace.0 00:19:07.858 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:07.858 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2419053 00:19:07.858 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2419053 ']' 00:19:07.858 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2419053 00:19:07.858 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:07.859 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.859 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2419053 
00:19:07.859 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:07.859 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:07.859 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2419053' 00:19:07.859 killing process with pid 2419053 00:19:07.859 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2419053 00:19:07.859 Received shutdown signal, test time was about 1.000000 seconds 00:19:07.859 00:19:07.859 Latency(us) 00:19:07.859 [2024-12-10T03:07:02.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.859 [2024-12-10T03:07:02.248Z] =================================================================================================================== 00:19:07.859 [2024-12-10T03:07:02.248Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:07.859 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2419053 00:19:07.859 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:07.859 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:07.859 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:07.859 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:07.859 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:07.859 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:07.859 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:07.859 rmmod nvme_tcp 00:19:07.859 rmmod nvme_fabrics 00:19:07.859 rmmod nvme_keyring 00:19:08.118 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:08.118 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:08.118 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:08.118 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2418899 ']' 00:19:08.118 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2418899 00:19:08.118 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2418899 ']' 00:19:08.118 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2418899 00:19:08.118 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:08.118 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:08.118 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2418899 00:19:08.118 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:08.118 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:08.118 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2418899' 00:19:08.118 killing process with pid 2418899 00:19:08.118 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2418899 00:19:08.118 04:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2418899 00:19:08.376 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:08.376 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:08.376 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:08.376 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:08.376 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:08.376 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:08.376 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:08.376 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:08.376 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:08.376 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.376 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:08.376 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.279 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:10.279 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.TPrzIGVQm8 /tmp/tmp.Jg5uVC231S /tmp/tmp.pZNQTltgiX 00:19:10.279 00:19:10.279 real 1m22.842s 00:19:10.279 user 2m20.189s 00:19:10.279 sys 0m24.173s 00:19:10.279 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.279 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.279 ************************************ 00:19:10.279 END TEST nvmf_tls 00:19:10.279 ************************************ 00:19:10.279 04:07:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:10.279 04:07:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:10.279 04:07:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.279 04:07:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:10.279 ************************************ 00:19:10.279 START TEST nvmf_fips 00:19:10.279 ************************************ 00:19:10.279 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:10.538 * Looking for test storage... 
00:19:10.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:10.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.538 --rc genhtml_branch_coverage=1 00:19:10.538 --rc genhtml_function_coverage=1 00:19:10.538 --rc genhtml_legend=1 00:19:10.538 --rc geninfo_all_blocks=1 00:19:10.538 --rc geninfo_unexecuted_blocks=1 00:19:10.538 00:19:10.538 ' 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:10.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.538 --rc genhtml_branch_coverage=1 00:19:10.538 --rc genhtml_function_coverage=1 00:19:10.538 --rc genhtml_legend=1 00:19:10.538 --rc geninfo_all_blocks=1 00:19:10.538 --rc geninfo_unexecuted_blocks=1 00:19:10.538 00:19:10.538 ' 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:10.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.538 --rc genhtml_branch_coverage=1 00:19:10.538 --rc genhtml_function_coverage=1 00:19:10.538 --rc genhtml_legend=1 00:19:10.538 --rc geninfo_all_blocks=1 00:19:10.538 --rc geninfo_unexecuted_blocks=1 00:19:10.538 00:19:10.538 ' 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:10.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.538 --rc genhtml_branch_coverage=1 00:19:10.538 --rc genhtml_function_coverage=1 00:19:10.538 --rc genhtml_legend=1 00:19:10.538 --rc geninfo_all_blocks=1 00:19:10.538 --rc geninfo_unexecuted_blocks=1 00:19:10.538 00:19:10.538 ' 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.538 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:10.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:10.539 04:07:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:19:10.539 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:10.540 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:10.540 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:10.540 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:10.798 Error setting digest 00:19:10.798 40026F388D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:10.798 40026F388D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:10.798 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:10.798 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:10.798 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:10.798 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:10.798 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:10.798 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:10.798 
04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.798 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:10.798 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:10.798 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:10.798 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.798 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:10.798 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.798 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:10.798 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:10.798 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:10.798 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:12.701 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:12.702 04:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:12.702 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:12.702 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:12.702 04:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:12.702 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:12.702 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:12.702 04:07:06 
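The discovery pass traced above is nvmf/common.sh gathering the supported NVMe-oF NICs on this rig: the PCI bus is filtered for known device IDs (here the two Intel E810 0x8086:0x159b functions bound to ice) and the kernel interface names are read back out of sysfs, yielding cvl_0_0 and cvl_0_1. A minimal sketch of the same walk, assuming lspci for the vendor/device filter (the script itself consults a prebuilt pci_bus_cache):

  for pci in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
      # every entry under .../net is a kernel netdev backed by this PCI function
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$dev" ] || continue
          echo "Found net devices under $pci: ${dev##*/}"
      done
  done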
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:12.702 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:12.702 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:12.702 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:12.702 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:12.702 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:12.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:12.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:19:12.961 00:19:12.961 --- 10.0.0.2 ping statistics --- 00:19:12.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.961 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:12.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:12.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:19:12.961 00:19:12.961 --- 10.0.0.1 ping statistics --- 00:19:12.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.961 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2421294 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2421294 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2421294 ']' 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.961 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:12.961 [2024-12-10 04:07:07.209598] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
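The ip/iptables sequence traced above builds the two-endpoint TCP topology these tests run on: one E810 port stays in the default namespace as the initiator (cvl_0_1, 10.0.0.1), the other is moved into a private namespace for the target (cvl_0_0, 10.0.0.2), a tagged iptables rule opens TCP/4420, and a ping in each direction verifies the link before the target application is started inside that namespace below. Condensed from the commands in the log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                          # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> initiator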
00:19:12.961 [2024-12-10 04:07:07.209675] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.961 [2024-12-10 04:07:07.283639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.961 [2024-12-10 04:07:07.341757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.961 [2024-12-10 04:07:07.341809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.961 [2024-12-10 04:07:07.341823] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.961 [2024-12-10 04:07:07.341835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.961 [2024-12-10 04:07:07.341845] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:12.961 [2024-12-10 04:07:07.342408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.219 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.219 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:13.219 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:13.219 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:13.219 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:13.219 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.219 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:13.219 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:13.219 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:13.219 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.REP 00:19:13.219 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:13.219 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.REP 00:19:13.219 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.REP 00:19:13.219 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.REP 00:19:13.219 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:13.477 [2024-12-10 04:07:07.730716] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.477 [2024-12-10 04:07:07.746735] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:13.477 [2024-12-10 04:07:07.746997] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.477 malloc0 00:19:13.477 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.477 04:07:07 
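With the target up (nvmf_tgt -m 0x2 inside the namespace, pid 2421294), fips.sh prepares the TLS material: the NVMe TLS PSK interchange secret is written without a trailing newline to a 0600 temp file, /tmp/spdk-psk.REP in this run, and rpc.py then configures the TCP transport, a TLS listener on 10.0.0.2:4420 and a malloc0 namespace, which is what the NOTICE lines above report. The key-file half of that, using the exact key string from the log:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)      # /tmp/spdk-psk.REP in this run
  echo -n "$key" > "$key_path"            # no trailing newline, as in the script
  chmod 0600 "$key_path"                  # keep the PSK file private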
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2421444 00:19:13.477 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:13.477 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2421444 /var/tmp/bdevperf.sock 00:19:13.477 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2421444 ']' 00:19:13.477 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.477 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:13.477 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.477 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.477 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:13.737 [2024-12-10 04:07:07.880676] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:19:13.737 [2024-12-10 04:07:07.880775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2421444 ] 00:19:13.737 [2024-12-10 04:07:07.947277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.737 [2024-12-10 04:07:08.003422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.737 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.737 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:13.737 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.REP 00:19:14.303 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:14.303 [2024-12-10 04:07:08.655648] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.561 TLSTESTn1 00:19:14.561 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:14.561 Running I/O for 10 seconds... 
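On the initiator side the trace then launches bdevperf in RPC-driven mode (-z) on core mask 0x4, registers the PSK file in the keyring as key0, attaches the TLS-protected controller to nqn.2016-06.io.spdk:cnode1, and kicks off the timed run through bdevperf.py, which is the 'Running I/O for 10 seconds...' line above. Condensed to the flags from the log, with paths shortened to the SPDK tree and $key_path being the PSK file written earlier:

  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &
  # (the script waits for /var/tmp/bdevperf.sock to come up before the RPCs below)
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests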
00:19:16.874 3551.00 IOPS, 13.87 MiB/s [2024-12-10T03:07:12.200Z] 3546.00 IOPS, 13.85 MiB/s [2024-12-10T03:07:13.140Z] 3553.33 IOPS, 13.88 MiB/s [2024-12-10T03:07:14.075Z] 3564.75 IOPS, 13.92 MiB/s [2024-12-10T03:07:15.015Z] 3546.40 IOPS, 13.85 MiB/s [2024-12-10T03:07:15.949Z] 3559.17 IOPS, 13.90 MiB/s [2024-12-10T03:07:16.886Z] 3569.86 IOPS, 13.94 MiB/s [2024-12-10T03:07:18.263Z] 3580.50 IOPS, 13.99 MiB/s [2024-12-10T03:07:19.200Z] 3579.78 IOPS, 13.98 MiB/s [2024-12-10T03:07:19.200Z] 3588.90 IOPS, 14.02 MiB/s 00:19:24.811 Latency(us) 00:19:24.811 [2024-12-10T03:07:19.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.811 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:24.811 Verification LBA range: start 0x0 length 0x2000 00:19:24.811 TLSTESTn1 : 10.02 3594.26 14.04 0.00 0.00 35550.18 7864.32 33593.27 00:19:24.811 [2024-12-10T03:07:19.200Z] =================================================================================================================== 00:19:24.811 [2024-12-10T03:07:19.200Z] Total : 3594.26 14.04 0.00 0.00 35550.18 7864.32 33593.27 00:19:24.811 { 00:19:24.811 "results": [ 00:19:24.811 { 00:19:24.811 "job": "TLSTESTn1", 00:19:24.811 "core_mask": "0x4", 00:19:24.811 "workload": "verify", 00:19:24.811 "status": "finished", 00:19:24.811 "verify_range": { 00:19:24.811 "start": 0, 00:19:24.811 "length": 8192 00:19:24.811 }, 00:19:24.811 "queue_depth": 128, 00:19:24.811 "io_size": 4096, 00:19:24.811 "runtime": 10.020411, 00:19:24.811 "iops": 3594.2637482634195, 00:19:24.811 "mibps": 14.040092766653983, 00:19:24.811 "io_failed": 0, 00:19:24.811 "io_timeout": 0, 00:19:24.812 "avg_latency_us": 35550.17718413215, 00:19:24.812 "min_latency_us": 7864.32, 00:19:24.812 "max_latency_us": 33593.26814814815 00:19:24.812 } 00:19:24.812 ], 00:19:24.812 "core_count": 1 00:19:24.812 } 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:24.812 nvmf_trace.0 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2421444 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2421444 ']' 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 2421444 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2421444 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2421444' 00:19:24.812 killing process with pid 2421444 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2421444 00:19:24.812 Received shutdown signal, test time was about 10.000000 seconds 00:19:24.812 00:19:24.812 Latency(us) 00:19:24.812 [2024-12-10T03:07:19.201Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.812 [2024-12-10T03:07:19.201Z] =================================================================================================================== 00:19:24.812 [2024-12-10T03:07:19.201Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:24.812 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2421444 00:19:25.072 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:25.072 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:25.072 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:25.072 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:25.072 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:25.072 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:25.072 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:25.072 rmmod nvme_tcp 00:19:25.072 rmmod nvme_fabrics 00:19:25.072 rmmod nvme_keyring 00:19:25.072 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:25.072 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:25.072 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:25.073 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2421294 ']' 00:19:25.073 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2421294 00:19:25.073 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2421294 ']' 00:19:25.073 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2421294 00:19:25.073 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:25.073 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.073 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2421294 00:19:25.073 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:25.073 04:07:19 
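After the roughly 3.6k IOPS verify run passes, teardown follows the usual path: both SPDK processes are killed, the kernel NVMe/TCP initiator modules are unloaded (the rmmod lines nearby), and the trace that follows drops the tagged iptables rule, clears the namespace state and deletes the PSK file. Roughly, with the namespace removal paraphrased since the script hides it inside its _remove_spdk_ns helper:

  kill "$bdevperf_pid" "$nvmfpid"                        # 2421444 and 2421294 in this run
  modprobe -v -r nvme-tcp                                # drops nvme_tcp/nvme_fabrics/nvme_keyring
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep only non-SPDK rules
  ip netns delete cvl_0_0_ns_spdk                        # assumption: what _remove_spdk_ns amounts to here
  ip -4 addr flush cvl_0_1
  rm -f "$key_path"                                      # /tmp/spdk-psk.REP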
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:25.073 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2421294' 00:19:25.073 killing process with pid 2421294 00:19:25.073 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2421294 00:19:25.073 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2421294 00:19:25.334 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:25.334 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:25.334 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:25.334 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:25.334 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:25.334 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:25.334 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:25.334 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:25.334 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:25.334 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.334 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.334 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.244 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:27.244 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.REP 00:19:27.244 00:19:27.244 real 0m16.953s 00:19:27.244 user 0m22.015s 00:19:27.244 sys 0m5.604s 00:19:27.244 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.244 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:27.244 ************************************ 00:19:27.244 END TEST nvmf_fips 00:19:27.244 ************************************ 00:19:27.245 04:07:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:27.245 04:07:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:27.245 04:07:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.245 04:07:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:27.504 ************************************ 00:19:27.504 START TEST nvmf_control_msg_list 00:19:27.504 ************************************ 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:27.504 * Looking for test storage... 
00:19:27.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:27.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.504 --rc genhtml_branch_coverage=1 00:19:27.504 --rc genhtml_function_coverage=1 00:19:27.504 --rc genhtml_legend=1 00:19:27.504 --rc geninfo_all_blocks=1 00:19:27.504 --rc geninfo_unexecuted_blocks=1 00:19:27.504 00:19:27.504 ' 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:27.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.504 --rc genhtml_branch_coverage=1 00:19:27.504 --rc genhtml_function_coverage=1 00:19:27.504 --rc genhtml_legend=1 00:19:27.504 --rc geninfo_all_blocks=1 00:19:27.504 --rc geninfo_unexecuted_blocks=1 00:19:27.504 00:19:27.504 ' 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:27.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.504 --rc genhtml_branch_coverage=1 00:19:27.504 --rc genhtml_function_coverage=1 00:19:27.504 --rc genhtml_legend=1 00:19:27.504 --rc geninfo_all_blocks=1 00:19:27.504 --rc geninfo_unexecuted_blocks=1 00:19:27.504 00:19:27.504 ' 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:27.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.504 --rc genhtml_branch_coverage=1 00:19:27.504 --rc genhtml_function_coverage=1 00:19:27.504 --rc genhtml_legend=1 00:19:27.504 --rc geninfo_all_blocks=1 00:19:27.504 --rc geninfo_unexecuted_blocks=1 00:19:27.504 00:19:27.504 ' 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.504 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:27.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:27.505 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:30.036 04:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:30.036 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.036 04:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:30.036 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:30.036 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:30.036 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:30.036 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:30.036 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:30.036 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:30.036 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:30.036 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:30.036 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:30.036 04:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:30.036 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:30.036 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:30.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:19:30.036 00:19:30.036 --- 10.0.0.2 ping statistics --- 00:19:30.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.036 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:19:30.036 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:30.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:30.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:19:30.037 00:19:30.037 --- 10.0.0.1 ping statistics --- 00:19:30.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.037 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2424711 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2424711 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2424711 ']' 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:30.037 [2024-12-10 04:07:24.157450] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:19:30.037 [2024-12-10 04:07:24.157567] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.037 [2024-12-10 04:07:24.234103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.037 [2024-12-10 04:07:24.290586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.037 [2024-12-10 04:07:24.290660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.037 [2024-12-10 04:07:24.290683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.037 [2024-12-10 04:07:24.290694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.037 [2024-12-10 04:07:24.290704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
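For this target the launch pattern repeats: nvmf_tgt runs inside the namespace with the full trace mask and the script blocks in waitforlisten until the RPC socket answers before any rpc_cmd is issued. A rough equivalent of that launch-and-wait step (the polling loop is a paraphrase, not the literal waitforlisten implementation):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  # poll the default /var/tmp/spdk.sock until the target answers RPCs
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
      sleep 0.5
  done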
00:19:30.037 [2024-12-10 04:07:24.291269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:30.037 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:30.297 [2024-12-10 04:07:24.428775] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:30.297 Malloc0 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.297 04:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:30.297 [2024-12-10 04:07:24.468186] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2424851 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2424852 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2424853 00:19:30.297 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2424851 00:19:30.298 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:30.298 [2024-12-10 04:07:24.537027] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:30.298 [2024-12-10 04:07:24.537358] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:30.298 [2024-12-10 04:07:24.547002] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:31.234 Initializing NVMe Controllers 00:19:31.234 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:31.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:31.234 Initialization complete. Launching workers. 
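
Up to this point control_msg_list.sh has built the whole data path over RPC: a TCP transport deliberately limited to a single control message (--control-msg-num 1, 768-byte in-capsule data), subsystem nqn.2024-07.io.spdk:cnode0, a Malloc0 bdev (32 MB, 512-byte blocks) exposed as namespace 1, and a listener on 10.0.0.2:4420, then launched three spdk_nvme_perf initiators on separate cores against it. The sketch below restates that sequence with the flags copied from the trace; invoking scripts/rpc.py directly is an assumed equivalent of the harness's rpc_cmd wrapper.

    # Sketch of the control_msg_list setup traced above (flags copied from the log;
    # direct rpc.py calls stand in for the harness's rpc_cmd helper).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    NQN=nqn.2024-07.io.spdk:cnode0

    $RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    $RPC nvmf_create_subsystem "$NQN" -a
    $RPC bdev_malloc_create -b Malloc0 32 512
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc0
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

    # Three one-second randread initiators on separate cores, all contending for
    # the transport's single control message.
    for mask in 0x2 0x4 0x8; do
        "$SPDK/build/bin/spdk_nvme_perf" -c "$mask" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    done
    wait

In the per-core latency tables that follow, two of the runs average roughly 40 ms at 25 IOPS while the third completes at about 257 us and ~3884 IOPS, consistent with two initiators queuing behind that single control-message buffer.
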
00:19:31.234 ======================================================== 00:19:31.234 Latency(us) 00:19:31.234 Device Information : IOPS MiB/s Average min max 00:19:31.234 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40901.74 40811.56 40952.90 00:19:31.234 ======================================================== 00:19:31.234 Total : 25.00 0.10 40901.74 40811.56 40952.90 00:19:31.234 00:19:31.493 Initializing NVMe Controllers 00:19:31.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:31.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:31.493 Initialization complete. Launching workers. 00:19:31.493 ======================================================== 00:19:31.493 Latency(us) 00:19:31.493 Device Information : IOPS MiB/s Average min max 00:19:31.493 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40925.30 40550.20 41890.02 00:19:31.493 ======================================================== 00:19:31.493 Total : 25.00 0.10 40925.30 40550.20 41890.02 00:19:31.493 00:19:31.493 Initializing NVMe Controllers 00:19:31.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:31.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:31.493 Initialization complete. Launching workers. 00:19:31.493 ======================================================== 00:19:31.493 Latency(us) 00:19:31.493 Device Information : IOPS MiB/s Average min max 00:19:31.493 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3884.00 15.17 257.03 159.01 570.29 00:19:31.493 ======================================================== 00:19:31.493 Total : 3884.00 15.17 257.03 159.01 570.29 00:19:31.493 00:19:31.493 [2024-12-10 04:07:25.720233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5570 is same with the state(6) to be set 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2424852 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2424853 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:31.493 rmmod nvme_tcp 00:19:31.493 rmmod nvme_fabrics 00:19:31.493 rmmod nvme_keyring 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:31.493 04:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2424711 ']' 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2424711 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2424711 ']' 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2424711 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2424711 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2424711' 00:19:31.493 killing process with pid 2424711 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2424711 00:19:31.493 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2424711 00:19:31.753 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:31.753 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:31.753 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:31.753 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:31.753 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:31.753 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:31.753 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:31.753 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:31.753 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:31.753 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.753 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.753 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:34.290 00:19:34.290 real 0m6.452s 00:19:34.290 user 0m5.785s 00:19:34.290 sys 0m2.618s 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@10 -- # set +x 00:19:34.290 ************************************ 00:19:34.290 END TEST nvmf_control_msg_list 00:19:34.290 ************************************ 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:34.290 ************************************ 00:19:34.290 START TEST nvmf_wait_for_buf 00:19:34.290 ************************************ 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:34.290 * Looking for test storage... 00:19:34.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:34.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.290 --rc genhtml_branch_coverage=1 00:19:34.290 --rc genhtml_function_coverage=1 00:19:34.290 --rc genhtml_legend=1 00:19:34.290 --rc geninfo_all_blocks=1 00:19:34.290 --rc geninfo_unexecuted_blocks=1 00:19:34.290 00:19:34.290 ' 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:34.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.290 --rc genhtml_branch_coverage=1 00:19:34.290 --rc genhtml_function_coverage=1 00:19:34.290 --rc genhtml_legend=1 00:19:34.290 --rc geninfo_all_blocks=1 00:19:34.290 --rc geninfo_unexecuted_blocks=1 00:19:34.290 00:19:34.290 ' 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:34.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.290 --rc genhtml_branch_coverage=1 00:19:34.290 --rc genhtml_function_coverage=1 00:19:34.290 --rc genhtml_legend=1 00:19:34.290 --rc geninfo_all_blocks=1 00:19:34.290 --rc geninfo_unexecuted_blocks=1 00:19:34.290 00:19:34.290 ' 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:34.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.290 --rc genhtml_branch_coverage=1 00:19:34.290 --rc genhtml_function_coverage=1 00:19:34.290 --rc genhtml_legend=1 00:19:34.290 --rc geninfo_all_blocks=1 00:19:34.290 --rc geninfo_unexecuted_blocks=1 00:19:34.290 00:19:34.290 ' 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.290 04:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:34.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.290 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:34.291 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:34.291 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:34.291 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.191 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:36.191 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:36.191 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:36.191 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:36.191 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:36.191 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:36.191 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:36.191 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:36.191 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:36.191 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:36.191 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:36.191 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:36.191 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:36.191 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:36.191 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:36.191 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.191 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.192 
04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:36.192 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:36.192 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:36.192 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:36.192 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.192 04:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:36.192 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:36.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:36.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:19:36.450 00:19:36.450 --- 10.0.0.2 ping statistics --- 00:19:36.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.450 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:36.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:36.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:19:36.450 00:19:36.450 --- 10.0.0.1 ping statistics --- 00:19:36.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.450 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2426932 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2426932 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2426932 ']' 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.450 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.450 [2024-12-10 04:07:30.697446] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
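
The nvmf_tcp_init trace above amounts to moving one E810 port (cvl_0_0) into a private namespace to act as the target at 10.0.0.2 while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with the firewall opened for the NVMe/TCP port and a ping check in each direction. Condensed into a sketch, using the interface names and addresses this rig reports, so it is a rig-specific illustration rather than a general script:

    # Condensed from the nvmf_tcp_init trace above; names and addresses are this rig's.
    TGT_IF=cvl_0_0        # E810 port moved into the target namespace
    INI_IF=cvl_0_1        # sibling port left in the root namespace (initiator side)
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP port; the SPDK_NVMF comment is what lets the teardown's
    # iptables-save | grep -v SPDK_NVMF | iptables-restore pass strip it again.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"

    ping -c 1 10.0.0.2                      # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator

For this test the target is then started with --wait-for-rpc so that, as the trace that follows shows, the script can shrink the iobuf small pool (iobuf_set_options --small-pool-count 154 --small_bufsize=8192) before framework_start_init; after the perf run it reads iobuf_get_stats and compares the small_pool.retry counter (1974 here) against zero.
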
00:19:36.450 [2024-12-10 04:07:30.697534] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.450 [2024-12-10 04:07:30.769875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.450 [2024-12-10 04:07:30.825263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.450 [2024-12-10 04:07:30.825318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.450 [2024-12-10 04:07:30.825347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.450 [2024-12-10 04:07:30.825359] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.450 [2024-12-10 04:07:30.825369] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.450 [2024-12-10 04:07:30.825980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.709 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.709 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:36.709 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:36.709 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:36.709 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.709 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.709 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:36.709 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:36.709 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:36.709 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.709 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.709 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.709 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:36.709 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.709 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.709 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.709 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:36.709 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.709 04:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.709 Malloc0 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.709 [2024-12-10 04:07:31.068589] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.709 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.968 [2024-12-10 04:07:31.092839] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.968 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.968 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:36.968 [2024-12-10 04:07:31.177680] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:38.344 Initializing NVMe Controllers 00:19:38.344 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:38.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:38.344 Initialization complete. Launching workers. 00:19:38.344 ======================================================== 00:19:38.344 Latency(us) 00:19:38.344 Device Information : IOPS MiB/s Average min max 00:19:38.344 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 125.00 15.62 33293.96 23983.25 63849.70 00:19:38.344 ======================================================== 00:19:38.344 Total : 125.00 15.62 33293.96 23983.25 63849.70 00:19:38.344 00:19:38.344 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:38.344 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:38.344 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.344 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:38.344 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.344 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1974 00:19:38.344 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1974 -eq 0 ]] 00:19:38.344 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:38.344 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:38.344 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:38.344 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:38.344 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:38.344 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:38.344 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:38.344 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:38.344 rmmod nvme_tcp 00:19:38.345 rmmod nvme_fabrics 00:19:38.345 rmmod nvme_keyring 00:19:38.345 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:38.345 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:38.345 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:38.345 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2426932 ']' 00:19:38.345 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2426932 00:19:38.345 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2426932 ']' 00:19:38.345 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2426932 00:19:38.345 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:19:38.345 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.345 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2426932 00:19:38.604 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.604 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.604 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2426932' 00:19:38.604 killing process with pid 2426932 00:19:38.604 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2426932 00:19:38.604 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2426932 00:19:38.604 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:38.604 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:38.604 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:38.604 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:38.604 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:38.604 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:38.604 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:38.604 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:38.604 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:38.604 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.604 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.604 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.140 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:41.140 00:19:41.140 real 0m6.868s 00:19:41.140 user 0m3.272s 00:19:41.140 sys 0m2.069s 00:19:41.140 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.140 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:41.140 ************************************ 00:19:41.140 END TEST nvmf_wait_for_buf 00:19:41.140 ************************************ 00:19:41.140 04:07:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:41.140 04:07:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:41.140 04:07:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:41.140 04:07:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:41.140 04:07:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:41.140 04:07:35 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:43.118 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:43.118 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:43.118 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:43.118 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:43.118 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:43.118 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:43.118 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:43.119 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:43.119 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:43.119 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:43.119 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:43.119 ************************************ 00:19:43.119 START TEST nvmf_perf_adq 00:19:43.119 ************************************ 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:43.119 * Looking for test storage... 00:19:43.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:43.119 04:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:43.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.119 --rc genhtml_branch_coverage=1 00:19:43.119 --rc genhtml_function_coverage=1 00:19:43.119 --rc genhtml_legend=1 00:19:43.119 --rc geninfo_all_blocks=1 00:19:43.119 --rc geninfo_unexecuted_blocks=1 00:19:43.119 00:19:43.119 ' 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:43.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.119 --rc genhtml_branch_coverage=1 00:19:43.119 --rc genhtml_function_coverage=1 00:19:43.119 --rc genhtml_legend=1 00:19:43.119 --rc geninfo_all_blocks=1 00:19:43.119 --rc geninfo_unexecuted_blocks=1 00:19:43.119 00:19:43.119 ' 00:19:43.119 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:43.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.119 --rc genhtml_branch_coverage=1 00:19:43.120 --rc genhtml_function_coverage=1 00:19:43.120 --rc genhtml_legend=1 00:19:43.120 --rc geninfo_all_blocks=1 00:19:43.120 --rc geninfo_unexecuted_blocks=1 00:19:43.120 00:19:43.120 ' 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:43.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.120 --rc genhtml_branch_coverage=1 00:19:43.120 --rc genhtml_function_coverage=1 00:19:43.120 --rc genhtml_legend=1 00:19:43.120 --rc geninfo_all_blocks=1 00:19:43.120 --rc geninfo_unexecuted_blocks=1 00:19:43.120 00:19:43.120 ' 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
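gather_supported_nvmf_pci_devs, traced above and again below, matches PCI functions against a list of supported vendor/device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, several Mellanox IDs) and resolves each match to its kernel net device, which is how the cvl_0_0/cvl_0_1 interfaces on 0000:0a:00.0/1 are found. A rough standalone sketch of that discovery for the E810 (8086:159b) parts this host reports, walking /sys directly:

  # Mirror the "Found 0000:0a:00.x (0x8086 - 0x159b)" and
  # "Found net devices under ..." messages from the trace.
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
      [[ $(cat "$pci/device") == 0x159b ]] || continue
      echo "Found ${pci##*/} ($(cat "$pci/vendor") - $(cat "$pci/device"))"
      ls "$pci/net" 2>/dev/null    # e.g. cvl_0_0, cvl_0_1 on this host
  done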
00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:43.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:43.120 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:43.120 04:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:45.654 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:45.654 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:45.655 04:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:45.655 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:45.655 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:45.655 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:45.655 04:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:45.655 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:45.655 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:45.914 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:48.452 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:53.727 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:53.727 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:53.727 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:53.727 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:53.727 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:53.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:53.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:19:53.728 00:19:53.728 --- 10.0.0.2 ping statistics --- 00:19:53.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.728 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:53.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:53.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:19:53.728 00:19:53.728 --- 10.0.0.1 ping statistics --- 00:19:53.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.728 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2431780 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2431780 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2431780 ']' 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.728 [2024-12-10 04:07:47.715116] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
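For the perf_adq run, adq_reload_driver first reloads the NIC driver (modprobe -a sch_mqprio; rmmod ice; modprobe ice; sleep 5), then nvmftestinit builds a point-to-point test bed out of the two E810 ports: the target port cvl_0_0 is moved into a private network namespace and addressed 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP/4420 is opened in iptables, and reachability is verified with ping in both directions before nvmf_tgt is launched inside the namespace with -m 0xF --wait-for-rpc (the ip netns exec command traced above). Condensed from the traced commands; interface names and addresses are specific to this host:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns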
00:19:53.728 [2024-12-10 04:07:47.715189] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.728 [2024-12-10 04:07:47.790159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:53.728 [2024-12-10 04:07:47.849537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.728 [2024-12-10 04:07:47.849623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.728 [2024-12-10 04:07:47.849652] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.728 [2024-12-10 04:07:47.849663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.728 [2024-12-10 04:07:47.849673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.728 [2024-12-10 04:07:47.851383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.728 [2024-12-10 04:07:47.851463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:53.728 [2024-12-10 04:07:47.851466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.728 [2024-12-10 04:07:47.851406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.728 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.728 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.728 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:53.728 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:53.728 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:53.728 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.728 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.728 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.728 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:53.728 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:53.728 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.728 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.728 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.728 
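adq_configure_nvmf_target 0, traced above, runs while nvmf_tgt is still parked in --wait-for-rpc mode: it looks up the default sock implementation (posix here) and sets its placement-id mode from the test argument (0 on this first pass) plus zero-copy send, after which the framework is started (framework_start_init, traced just below). The same steps as standalone RPCs, assuming scripts/rpc.py against the default /var/tmp/spdk.sock:

  ./scripts/rpc.py sock_get_default_impl | jq -r .impl_name    # "posix" on this run
  ./scripts/rpc.py sock_impl_set_options -i posix \
      --enable-placement-id 0 --enable-zerocopy-send-server
  ./scripts/rpc.py framework_start_init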
04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:53.728 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.728 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.988 [2024-12-10 04:07:48.168807] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.988 Malloc1 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.988 [2024-12-10 04:07:48.235766] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2431927 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:53.988 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:55.890 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:55.890 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.890 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.890 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.890 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:55.890 "tick_rate": 2700000000, 00:19:55.890 "poll_groups": [ 00:19:55.890 { 00:19:55.890 "name": "nvmf_tgt_poll_group_000", 00:19:55.890 "admin_qpairs": 1, 00:19:55.890 "io_qpairs": 1, 00:19:55.890 "current_admin_qpairs": 1, 00:19:55.890 "current_io_qpairs": 1, 00:19:55.890 "pending_bdev_io": 0, 00:19:55.890 "completed_nvme_io": 18868, 00:19:55.890 "transports": [ 00:19:55.890 { 00:19:55.890 "trtype": "TCP" 00:19:55.890 } 00:19:55.890 ] 00:19:55.890 }, 00:19:55.890 { 00:19:55.890 "name": "nvmf_tgt_poll_group_001", 00:19:55.890 "admin_qpairs": 0, 00:19:55.890 "io_qpairs": 1, 00:19:55.890 "current_admin_qpairs": 0, 00:19:55.890 "current_io_qpairs": 1, 00:19:55.890 "pending_bdev_io": 0, 00:19:55.890 "completed_nvme_io": 19927, 00:19:55.890 "transports": [ 00:19:55.890 { 00:19:55.890 "trtype": "TCP" 00:19:55.890 } 00:19:55.890 ] 00:19:55.890 }, 00:19:55.890 { 00:19:55.890 "name": "nvmf_tgt_poll_group_002", 00:19:55.890 "admin_qpairs": 0, 00:19:55.890 "io_qpairs": 1, 00:19:55.890 "current_admin_qpairs": 0, 00:19:55.890 "current_io_qpairs": 1, 00:19:55.890 "pending_bdev_io": 0, 00:19:55.890 "completed_nvme_io": 20346, 00:19:55.890 "transports": [ 00:19:55.890 { 00:19:55.890 "trtype": "TCP" 00:19:55.890 } 00:19:55.890 ] 00:19:55.890 }, 00:19:55.890 { 00:19:55.890 "name": "nvmf_tgt_poll_group_003", 00:19:55.890 "admin_qpairs": 0, 00:19:55.890 "io_qpairs": 1, 00:19:55.890 "current_admin_qpairs": 0, 00:19:55.891 "current_io_qpairs": 1, 00:19:55.891 "pending_bdev_io": 0, 00:19:55.891 "completed_nvme_io": 19464, 00:19:55.891 "transports": [ 00:19:55.891 { 00:19:55.891 "trtype": "TCP" 00:19:55.891 } 00:19:55.891 ] 00:19:55.891 } 00:19:55.891 ] 00:19:55.891 }' 00:19:55.891 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:55.891 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:56.149 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:56.149 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:56.149 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2431927 00:20:04.268 Initializing NVMe Controllers 00:20:04.268 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:04.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:04.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:04.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:04.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:20:04.268 Initialization complete. Launching workers. 00:20:04.268 ======================================================== 00:20:04.268 Latency(us) 00:20:04.268 Device Information : IOPS MiB/s Average min max 00:20:04.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10326.60 40.34 6198.21 2405.18 10682.15 00:20:04.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10558.60 41.24 6062.06 2507.69 9954.42 00:20:04.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10758.70 42.03 5950.65 2575.52 9927.15 00:20:04.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10055.30 39.28 6366.63 2477.98 10648.06 00:20:04.269 ======================================================== 00:20:04.269 Total : 41699.20 162.89 6140.47 2405.18 10682.15 00:20:04.269 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:04.269 rmmod nvme_tcp 00:20:04.269 rmmod nvme_fabrics 00:20:04.269 rmmod nvme_keyring 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2431780 ']' 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2431780 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2431780 ']' 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2431780 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2431780 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2431780' 00:20:04.269 killing process with pid 2431780 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2431780 00:20:04.269 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2431780 00:20:04.528 04:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:04.528 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:04.528 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:04.528 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:04.528 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:04.528 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:04.528 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:04.528 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:04.528 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:04.528 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.528 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.528 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.434 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:06.434 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:06.434 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:06.434 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:07.369 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:09.904 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:15.191 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:15.191 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:15.191 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:15.191 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:15.191 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:15.192 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:15.192 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:15.192 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:15.192 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:15.192 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:15.192 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.192 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:15.192 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:15.192 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:15.192 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:15.192 04:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:15.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:15.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:20:15.192 00:20:15.192 --- 10.0.0.2 ping statistics --- 00:20:15.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.192 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:15.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:15.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:20:15.192 00:20:15.192 --- 10.0.0.1 ping statistics --- 00:20:15.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.192 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:15.192 net.core.busy_poll = 1 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:15.192 net.core.busy_read = 1 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2434557 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2434557 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2434557 ']' 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.192 [2024-12-10 04:08:09.311379] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:20:15.192 [2024-12-10 04:08:09.311477] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.192 [2024-12-10 04:08:09.385145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:15.192 [2024-12-10 04:08:09.442739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
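The ethtool, sysctl and tc commands traced above are the host-side ADQ setup this run applies (via "ip netns exec cvl_0_0_ns_spdk") before starting a fresh target whose TCP transport is created with --sock-priority 1. Pulled out of the xtrace for readability, the sequence is roughly the following; the interface name, listener address and queue layout are the values used by this particular run rather than general defaults, so treat it as a sketch, not a reference procedure.

    # ADQ host configuration as executed above (the test runs each command inside the
    # cvl_0_0_ns_spdk network namespace); IFACE/ADDR/PORT reflect this run only.
    IFACE=cvl_0_0; ADDR=10.0.0.2; PORT=4420

    # Hardware TC offload on the E810 port, packet-inspect optimization off.
    ethtool --offload "$IFACE" hw-tc-offload on
    ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off

    # Busy polling so socket reads poll their queues instead of waiting for interrupts.
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1

    # Two traffic classes: TC0 = queues 0-1 (default traffic), TC1 = queues 2-3 (NVMe/TCP).
    tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$IFACE" ingress

    # Steer inbound NVMe/TCP traffic for the listener into TC1 in hardware (skip_sw).
    tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
        dst_ip "$ADDR"/32 ip_proto tcp dst_port "$PORT" skip_sw hw_tc 1

    # The test then pins transmit/receive queues with scripts/perf/nvmf/set_xps_rxqs "$IFACE".

On the target side, the only differences from the earlier non-ADQ run are sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server and nvmf_create_transport ... --sock-priority 1, both visible in the rpc_cmd trace that follows.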
00:20:15.192 [2024-12-10 04:08:09.442796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.192 [2024-12-10 04:08:09.442809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.192 [2024-12-10 04:08:09.442821] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.192 [2024-12-10 04:08:09.442830] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:15.192 [2024-12-10 04:08:09.444294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.192 [2024-12-10 04:08:09.444360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.192 [2024-12-10 04:08:09.444424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:15.192 [2024-12-10 04:08:09.444427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.192 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.451 04:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.451 [2024-12-10 04:08:09.708370] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.451 Malloc1 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.451 [2024-12-10 04:08:09.765372] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2434598 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:15.451 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:17.981 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:17.981 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.981 04:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:17.981 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.981 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:17.981 "tick_rate": 2700000000, 00:20:17.981 "poll_groups": [ 00:20:17.981 { 00:20:17.981 "name": "nvmf_tgt_poll_group_000", 00:20:17.981 "admin_qpairs": 1, 00:20:17.981 "io_qpairs": 3, 00:20:17.981 "current_admin_qpairs": 1, 00:20:17.981 "current_io_qpairs": 3, 00:20:17.981 "pending_bdev_io": 0, 00:20:17.981 "completed_nvme_io": 25638, 00:20:17.981 "transports": [ 00:20:17.981 { 00:20:17.981 "trtype": "TCP" 00:20:17.981 } 00:20:17.981 ] 00:20:17.981 }, 00:20:17.981 { 00:20:17.981 "name": "nvmf_tgt_poll_group_001", 00:20:17.981 "admin_qpairs": 0, 00:20:17.981 "io_qpairs": 1, 00:20:17.981 "current_admin_qpairs": 0, 00:20:17.981 "current_io_qpairs": 1, 00:20:17.981 "pending_bdev_io": 0, 00:20:17.981 "completed_nvme_io": 25273, 00:20:17.981 "transports": [ 00:20:17.981 { 00:20:17.981 "trtype": "TCP" 00:20:17.981 } 00:20:17.981 ] 00:20:17.981 }, 00:20:17.981 { 00:20:17.981 "name": "nvmf_tgt_poll_group_002", 00:20:17.981 "admin_qpairs": 0, 00:20:17.981 "io_qpairs": 0, 00:20:17.981 "current_admin_qpairs": 0, 00:20:17.981 "current_io_qpairs": 0, 00:20:17.981 "pending_bdev_io": 0, 00:20:17.981 "completed_nvme_io": 0, 00:20:17.981 "transports": [ 00:20:17.981 { 00:20:17.981 "trtype": "TCP" 00:20:17.981 } 00:20:17.981 ] 00:20:17.981 }, 00:20:17.981 { 00:20:17.981 "name": "nvmf_tgt_poll_group_003", 00:20:17.981 "admin_qpairs": 0, 00:20:17.981 "io_qpairs": 0, 00:20:17.981 "current_admin_qpairs": 0, 00:20:17.981 "current_io_qpairs": 0, 00:20:17.981 "pending_bdev_io": 0, 00:20:17.981 "completed_nvme_io": 0, 00:20:17.981 "transports": [ 00:20:17.981 { 00:20:17.981 "trtype": "TCP" 00:20:17.981 } 00:20:17.981 ] 00:20:17.981 } 00:20:17.981 ] 00:20:17.981 }' 00:20:17.981 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:17.981 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:17.981 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:17.981 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:17.981 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2434598 00:20:26.093 Initializing NVMe Controllers 00:20:26.093 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:26.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:26.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:26.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:26.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:26.093 Initialization complete. Launching workers. 
00:20:26.093 ======================================================== 00:20:26.093 Latency(us) 00:20:26.093 Device Information : IOPS MiB/s Average min max 00:20:26.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4234.89 16.54 15118.07 2224.23 62049.24 00:20:26.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4015.99 15.69 15940.31 2001.26 62342.18 00:20:26.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5384.09 21.03 11888.10 1758.21 60848.96 00:20:26.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13571.77 53.01 4715.64 1713.62 7373.08 00:20:26.093 ======================================================== 00:20:26.093 Total : 27206.73 106.28 9411.11 1713.62 62342.18 00:20:26.093 00:20:26.093 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:26.093 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:26.093 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:26.093 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:26.094 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:26.094 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:26.094 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:26.094 rmmod nvme_tcp 00:20:26.094 rmmod nvme_fabrics 00:20:26.094 rmmod nvme_keyring 00:20:26.094 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:26.094 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:26.094 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:26.094 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2434557 ']' 00:20:26.094 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2434557 00:20:26.094 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2434557 ']' 00:20:26.094 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2434557 00:20:26.094 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:26.094 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.094 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2434557 00:20:26.094 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:26.094 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:26.094 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2434557' 00:20:26.094 killing process with pid 2434557 00:20:26.094 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2434557 00:20:26.094 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2434557 00:20:26.094 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:26.094 
04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:26.094 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:26.094 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:26.094 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:26.094 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:26.094 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:26.094 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:26.094 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:26.094 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.094 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.094 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.003 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:28.003 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:28.003 00:20:28.003 real 0m45.151s 00:20:28.003 user 2m40.997s 00:20:28.003 sys 0m9.072s 00:20:28.003 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.003 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.003 ************************************ 00:20:28.003 END TEST nvmf_perf_adq 00:20:28.003 ************************************ 00:20:28.003 04:08:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:28.003 04:08:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:28.003 04:08:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.003 04:08:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:28.262 ************************************ 00:20:28.262 START TEST nvmf_shutdown 00:20:28.262 ************************************ 00:20:28.262 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:28.262 * Looking for test storage... 
00:20:28.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:28.262 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:28.262 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:28.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.263 --rc genhtml_branch_coverage=1 00:20:28.263 --rc genhtml_function_coverage=1 00:20:28.263 --rc genhtml_legend=1 00:20:28.263 --rc geninfo_all_blocks=1 00:20:28.263 --rc geninfo_unexecuted_blocks=1 00:20:28.263 00:20:28.263 ' 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:28.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.263 --rc genhtml_branch_coverage=1 00:20:28.263 --rc genhtml_function_coverage=1 00:20:28.263 --rc genhtml_legend=1 00:20:28.263 --rc geninfo_all_blocks=1 00:20:28.263 --rc geninfo_unexecuted_blocks=1 00:20:28.263 00:20:28.263 ' 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:28.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.263 --rc genhtml_branch_coverage=1 00:20:28.263 --rc genhtml_function_coverage=1 00:20:28.263 --rc genhtml_legend=1 00:20:28.263 --rc geninfo_all_blocks=1 00:20:28.263 --rc geninfo_unexecuted_blocks=1 00:20:28.263 00:20:28.263 ' 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:28.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.263 --rc genhtml_branch_coverage=1 00:20:28.263 --rc genhtml_function_coverage=1 00:20:28.263 --rc genhtml_legend=1 00:20:28.263 --rc geninfo_all_blocks=1 00:20:28.263 --rc geninfo_unexecuted_blocks=1 00:20:28.263 00:20:28.263 ' 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
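The scripts/common.sh trace above is the lcov version gate: `lt 1.15 2` hands both strings to cmp_versions, which splits them on '.', '-' and ':' and compares them field by field, so "1.15" vs "2" is decided on the first field (1 < 2) and the matching LCOV_OPTS/LCOV block is exported. A condensed sketch of that helper, reconstructed from the xtrace only (the in-tree version handles more operators and edge cases), looks like this:

    # Illustrative reconstruction of the version check seen in the xtrace,
    # not the real scripts/common.sh.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            # Missing fields count as 0; "1.15" vs "2" is settled on the first field.
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
        done
        return 1   # equal down to the last field: neither '<' nor '>' holds
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # `lt 1.15 2` returns 0 here, as in the trace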
00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:28.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:28.263 04:08:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.263 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:28.263 ************************************ 00:20:28.263 START TEST nvmf_shutdown_tc1 00:20:28.263 ************************************ 00:20:28.264 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:28.264 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:28.264 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:28.264 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:28.264 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.264 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:28.264 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:28.264 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:28.264 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.264 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.264 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.264 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:28.264 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:28.264 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:28.264 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:30.797 04:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:30.797 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:30.798 04:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:30.798 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:30.798 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:30.798 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:30.798 04:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:30.798 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:30.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:30.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:20:30.798 00:20:30.798 --- 10.0.0.2 ping statistics --- 00:20:30.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.798 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:30.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:30.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:20:30.798 00:20:30.798 --- 10.0.0.1 ping statistics --- 00:20:30.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.798 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2437870 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2437870 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2437870 ']' 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
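For reference, the nvmf_tcp_init sequence above wires the two E810 ports into a single-host NVMe/TCP topology: the target-side port (cvl_0_0) is moved into a private network namespace while the initiator-side port (cvl_0_1) stays in the default namespace, so the 10.0.0.1 <-> 10.0.0.2 traffic loops between the two physical ports without leaving the machine. A minimal recap of the commands this run executed (interface names and addresses are the ones this particular host uses; the real harness also tags its iptables rule with an SPDK_NVMF comment so teardown can remove it):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP in
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

The two successful pings above are the sanity check that both directions work before the nvmf target is started inside the namespace.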
00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.798 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.798 [2024-12-10 04:08:24.844937] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:20:30.798 [2024-12-10 04:08:24.845009] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.799 [2024-12-10 04:08:24.916210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:30.799 [2024-12-10 04:08:24.974383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.799 [2024-12-10 04:08:24.974440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.799 [2024-12-10 04:08:24.974469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.799 [2024-12-10 04:08:24.974480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.799 [2024-12-10 04:08:24.974489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.799 [2024-12-10 04:08:24.976007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.799 [2024-12-10 04:08:24.976072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:30.799 [2024-12-10 04:08:24.976142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:30.799 [2024-12-10 04:08:24.976145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.799 [2024-12-10 04:08:25.112859] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:30.799 04:08:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.799 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.799 Malloc1 
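Each pass through the create_subsystems loop above appends one block of RPCs to rpcs.txt, and the batch is then replayed against the running target (the bare rpc_cmd at shutdown.sh@36); the Malloc1 notice is the first malloc bdev coming out of that batch. The exact block lives in target/shutdown.sh; a hedged per-subsystem equivalent issued directly through scripts/rpc.py, using the MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 values set at the top of the test (the SPDK1 serial string is an assumption, not the literal contents of rpcs.txt):

    scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The listener address and port are the same 10.0.0.2:4420 pair that every bdev_nvme_attach_controller entry in the generated JSON below points at.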
00:20:31.057 [2024-12-10 04:08:25.196976] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.057 Malloc2 00:20:31.057 Malloc3 00:20:31.057 Malloc4 00:20:31.057 Malloc5 00:20:31.057 Malloc6 00:20:31.315 Malloc7 00:20:31.315 Malloc8 00:20:31.315 Malloc9 00:20:31.315 Malloc10 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2438000 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2438000 /var/tmp/bdevperf.sock 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2438000 ']' 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:31.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
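The bdevperf.sock waitforlisten above belongs to a throw-away bdev_svc instance: tc1 hands it the generated --json config, lets it attach all ten controllers, then kills it outright (the kill -9 further down) and uses kill -0 to confirm the nvmf target survives that ungraceful host disconnect before the measured bdevperf pass. Every fragment produced by the gen_nvmf_target_json loop that follows is one bdev_nvme_attach_controller entry; with the variables substituted, the first one comes out as (restated from the printf output further down for readability):

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

Nine more entries, cnode2 through cnode10, follow the same pattern with only the numbers changing.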
00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.315 { 00:20:31.315 "params": { 00:20:31.315 "name": "Nvme$subsystem", 00:20:31.315 "trtype": "$TEST_TRANSPORT", 00:20:31.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.315 "adrfam": "ipv4", 00:20:31.315 "trsvcid": "$NVMF_PORT", 00:20:31.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.315 "hdgst": ${hdgst:-false}, 00:20:31.315 "ddgst": ${ddgst:-false} 00:20:31.315 }, 00:20:31.315 "method": "bdev_nvme_attach_controller" 00:20:31.315 } 00:20:31.315 EOF 00:20:31.315 )") 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.315 { 00:20:31.315 "params": { 00:20:31.315 "name": "Nvme$subsystem", 00:20:31.315 "trtype": "$TEST_TRANSPORT", 00:20:31.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.315 "adrfam": "ipv4", 00:20:31.315 "trsvcid": "$NVMF_PORT", 00:20:31.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.315 "hdgst": ${hdgst:-false}, 00:20:31.315 "ddgst": ${ddgst:-false} 00:20:31.315 }, 00:20:31.315 "method": "bdev_nvme_attach_controller" 00:20:31.315 } 00:20:31.315 EOF 00:20:31.315 )") 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.315 { 00:20:31.315 "params": { 00:20:31.315 "name": "Nvme$subsystem", 00:20:31.315 "trtype": "$TEST_TRANSPORT", 00:20:31.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.315 "adrfam": "ipv4", 00:20:31.315 "trsvcid": "$NVMF_PORT", 00:20:31.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.315 "hdgst": ${hdgst:-false}, 00:20:31.315 "ddgst": ${ddgst:-false} 00:20:31.315 }, 00:20:31.315 "method": "bdev_nvme_attach_controller" 00:20:31.315 } 00:20:31.315 EOF 00:20:31.315 )") 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.315 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.315 { 00:20:31.315 "params": { 00:20:31.316 "name": "Nvme$subsystem", 00:20:31.316 
"trtype": "$TEST_TRANSPORT", 00:20:31.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.316 "adrfam": "ipv4", 00:20:31.316 "trsvcid": "$NVMF_PORT", 00:20:31.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.316 "hdgst": ${hdgst:-false}, 00:20:31.316 "ddgst": ${ddgst:-false} 00:20:31.316 }, 00:20:31.316 "method": "bdev_nvme_attach_controller" 00:20:31.316 } 00:20:31.316 EOF 00:20:31.316 )") 00:20:31.316 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.316 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.316 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.316 { 00:20:31.316 "params": { 00:20:31.316 "name": "Nvme$subsystem", 00:20:31.316 "trtype": "$TEST_TRANSPORT", 00:20:31.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.316 "adrfam": "ipv4", 00:20:31.316 "trsvcid": "$NVMF_PORT", 00:20:31.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.316 "hdgst": ${hdgst:-false}, 00:20:31.316 "ddgst": ${ddgst:-false} 00:20:31.316 }, 00:20:31.316 "method": "bdev_nvme_attach_controller" 00:20:31.316 } 00:20:31.316 EOF 00:20:31.316 )") 00:20:31.316 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.316 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.316 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.316 { 00:20:31.316 "params": { 00:20:31.316 "name": "Nvme$subsystem", 00:20:31.316 "trtype": "$TEST_TRANSPORT", 00:20:31.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.316 "adrfam": "ipv4", 00:20:31.316 "trsvcid": "$NVMF_PORT", 00:20:31.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.316 "hdgst": ${hdgst:-false}, 00:20:31.316 "ddgst": ${ddgst:-false} 00:20:31.316 }, 00:20:31.316 "method": "bdev_nvme_attach_controller" 00:20:31.316 } 00:20:31.316 EOF 00:20:31.316 )") 00:20:31.316 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.316 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.316 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.316 { 00:20:31.316 "params": { 00:20:31.316 "name": "Nvme$subsystem", 00:20:31.316 "trtype": "$TEST_TRANSPORT", 00:20:31.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.316 "adrfam": "ipv4", 00:20:31.316 "trsvcid": "$NVMF_PORT", 00:20:31.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.316 "hdgst": ${hdgst:-false}, 00:20:31.316 "ddgst": ${ddgst:-false} 00:20:31.316 }, 00:20:31.316 "method": "bdev_nvme_attach_controller" 00:20:31.316 } 00:20:31.316 EOF 00:20:31.316 )") 00:20:31.316 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.575 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.575 04:08:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.575 { 00:20:31.575 "params": { 00:20:31.575 "name": "Nvme$subsystem", 00:20:31.575 "trtype": "$TEST_TRANSPORT", 00:20:31.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.575 "adrfam": "ipv4", 00:20:31.575 "trsvcid": "$NVMF_PORT", 00:20:31.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.575 "hdgst": ${hdgst:-false}, 00:20:31.575 "ddgst": ${ddgst:-false} 00:20:31.575 }, 00:20:31.575 "method": "bdev_nvme_attach_controller" 00:20:31.575 } 00:20:31.575 EOF 00:20:31.575 )") 00:20:31.575 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.575 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.575 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.575 { 00:20:31.575 "params": { 00:20:31.575 "name": "Nvme$subsystem", 00:20:31.575 "trtype": "$TEST_TRANSPORT", 00:20:31.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.575 "adrfam": "ipv4", 00:20:31.575 "trsvcid": "$NVMF_PORT", 00:20:31.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.575 "hdgst": ${hdgst:-false}, 00:20:31.575 "ddgst": ${ddgst:-false} 00:20:31.575 }, 00:20:31.575 "method": "bdev_nvme_attach_controller" 00:20:31.575 } 00:20:31.575 EOF 00:20:31.575 )") 00:20:31.575 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.575 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.575 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.575 { 00:20:31.575 "params": { 00:20:31.575 "name": "Nvme$subsystem", 00:20:31.575 "trtype": "$TEST_TRANSPORT", 00:20:31.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.575 "adrfam": "ipv4", 00:20:31.575 "trsvcid": "$NVMF_PORT", 00:20:31.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.575 "hdgst": ${hdgst:-false}, 00:20:31.575 "ddgst": ${ddgst:-false} 00:20:31.575 }, 00:20:31.575 "method": "bdev_nvme_attach_controller" 00:20:31.575 } 00:20:31.575 EOF 00:20:31.575 )") 00:20:31.575 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.575 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
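The IFS=, and printf '%s\n' pair that follows is what turns the config array into a single stream: in bash, "${config[*]}" joins the array elements with the first character of IFS, so setting IFS to a comma yields exactly the comma-separated list of entries printed below. A tiny standalone illustration of the idiom (the toy fragments are placeholders):

    config=('{ "name": "Nvme1" }' '{ "name": "Nvme2" }')
    IFS=,
    printf '%s\n' "${config[*]}"
    # prints: { "name": "Nvme1" },{ "name": "Nvme2" }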
00:20:31.575 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:31.575 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:31.575 "params": { 00:20:31.575 "name": "Nvme1", 00:20:31.575 "trtype": "tcp", 00:20:31.575 "traddr": "10.0.0.2", 00:20:31.575 "adrfam": "ipv4", 00:20:31.575 "trsvcid": "4420", 00:20:31.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.575 "hdgst": false, 00:20:31.575 "ddgst": false 00:20:31.575 }, 00:20:31.575 "method": "bdev_nvme_attach_controller" 00:20:31.575 },{ 00:20:31.575 "params": { 00:20:31.575 "name": "Nvme2", 00:20:31.575 "trtype": "tcp", 00:20:31.575 "traddr": "10.0.0.2", 00:20:31.575 "adrfam": "ipv4", 00:20:31.575 "trsvcid": "4420", 00:20:31.575 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:31.575 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:31.575 "hdgst": false, 00:20:31.575 "ddgst": false 00:20:31.575 }, 00:20:31.575 "method": "bdev_nvme_attach_controller" 00:20:31.575 },{ 00:20:31.575 "params": { 00:20:31.575 "name": "Nvme3", 00:20:31.575 "trtype": "tcp", 00:20:31.575 "traddr": "10.0.0.2", 00:20:31.575 "adrfam": "ipv4", 00:20:31.575 "trsvcid": "4420", 00:20:31.575 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:31.575 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:31.575 "hdgst": false, 00:20:31.575 "ddgst": false 00:20:31.575 }, 00:20:31.575 "method": "bdev_nvme_attach_controller" 00:20:31.575 },{ 00:20:31.575 "params": { 00:20:31.575 "name": "Nvme4", 00:20:31.575 "trtype": "tcp", 00:20:31.575 "traddr": "10.0.0.2", 00:20:31.575 "adrfam": "ipv4", 00:20:31.575 "trsvcid": "4420", 00:20:31.575 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:31.575 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:31.575 "hdgst": false, 00:20:31.575 "ddgst": false 00:20:31.575 }, 00:20:31.575 "method": "bdev_nvme_attach_controller" 00:20:31.575 },{ 00:20:31.575 "params": { 00:20:31.575 "name": "Nvme5", 00:20:31.575 "trtype": "tcp", 00:20:31.575 "traddr": "10.0.0.2", 00:20:31.575 "adrfam": "ipv4", 00:20:31.575 "trsvcid": "4420", 00:20:31.575 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:31.575 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:31.575 "hdgst": false, 00:20:31.575 "ddgst": false 00:20:31.575 }, 00:20:31.575 "method": "bdev_nvme_attach_controller" 00:20:31.575 },{ 00:20:31.575 "params": { 00:20:31.575 "name": "Nvme6", 00:20:31.575 "trtype": "tcp", 00:20:31.575 "traddr": "10.0.0.2", 00:20:31.575 "adrfam": "ipv4", 00:20:31.575 "trsvcid": "4420", 00:20:31.575 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:31.575 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:31.575 "hdgst": false, 00:20:31.575 "ddgst": false 00:20:31.575 }, 00:20:31.575 "method": "bdev_nvme_attach_controller" 00:20:31.575 },{ 00:20:31.575 "params": { 00:20:31.575 "name": "Nvme7", 00:20:31.575 "trtype": "tcp", 00:20:31.575 "traddr": "10.0.0.2", 00:20:31.575 "adrfam": "ipv4", 00:20:31.575 "trsvcid": "4420", 00:20:31.575 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:31.575 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:31.575 "hdgst": false, 00:20:31.575 "ddgst": false 00:20:31.575 }, 00:20:31.575 "method": "bdev_nvme_attach_controller" 00:20:31.575 },{ 00:20:31.575 "params": { 00:20:31.575 "name": "Nvme8", 00:20:31.575 "trtype": "tcp", 00:20:31.575 "traddr": "10.0.0.2", 00:20:31.575 "adrfam": "ipv4", 00:20:31.575 "trsvcid": "4420", 00:20:31.575 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:31.575 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:31.575 "hdgst": false, 00:20:31.575 "ddgst": false 00:20:31.575 }, 00:20:31.575 "method": "bdev_nvme_attach_controller" 00:20:31.575 },{ 00:20:31.575 "params": { 00:20:31.575 "name": "Nvme9", 00:20:31.575 "trtype": "tcp", 00:20:31.575 "traddr": "10.0.0.2", 00:20:31.575 "adrfam": "ipv4", 00:20:31.575 "trsvcid": "4420", 00:20:31.575 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:31.575 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:31.575 "hdgst": false, 00:20:31.575 "ddgst": false 00:20:31.575 }, 00:20:31.575 "method": "bdev_nvme_attach_controller" 00:20:31.575 },{ 00:20:31.575 "params": { 00:20:31.575 "name": "Nvme10", 00:20:31.575 "trtype": "tcp", 00:20:31.575 "traddr": "10.0.0.2", 00:20:31.575 "adrfam": "ipv4", 00:20:31.575 "trsvcid": "4420", 00:20:31.575 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:31.575 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:31.575 "hdgst": false, 00:20:31.575 "ddgst": false 00:20:31.575 }, 00:20:31.575 "method": "bdev_nvme_attach_controller" 00:20:31.575 }' 00:20:31.575 [2024-12-10 04:08:25.719377] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:20:31.576 [2024-12-10 04:08:25.719454] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:31.576 [2024-12-10 04:08:25.791810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.576 [2024-12-10 04:08:25.850980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.474 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:33.474 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:33.474 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:33.474 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.474 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:33.474 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.474 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2438000 00:20:33.474 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:33.474 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:34.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2438000 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2437870 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.479 { 00:20:34.479 "params": { 00:20:34.479 "name": "Nvme$subsystem", 00:20:34.479 "trtype": "$TEST_TRANSPORT", 00:20:34.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.479 "adrfam": "ipv4", 00:20:34.479 "trsvcid": "$NVMF_PORT", 00:20:34.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.479 "hdgst": ${hdgst:-false}, 00:20:34.479 "ddgst": ${ddgst:-false} 00:20:34.479 }, 00:20:34.479 "method": "bdev_nvme_attach_controller" 00:20:34.479 } 00:20:34.479 EOF 00:20:34.479 )") 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.479 { 00:20:34.479 "params": { 00:20:34.479 "name": "Nvme$subsystem", 00:20:34.479 "trtype": "$TEST_TRANSPORT", 00:20:34.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.479 "adrfam": "ipv4", 00:20:34.479 "trsvcid": "$NVMF_PORT", 00:20:34.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.479 "hdgst": ${hdgst:-false}, 00:20:34.479 "ddgst": ${ddgst:-false} 00:20:34.479 }, 00:20:34.479 "method": "bdev_nvme_attach_controller" 00:20:34.479 } 00:20:34.479 EOF 00:20:34.479 )") 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.479 { 00:20:34.479 "params": { 00:20:34.479 "name": "Nvme$subsystem", 00:20:34.479 "trtype": "$TEST_TRANSPORT", 00:20:34.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.479 "adrfam": "ipv4", 00:20:34.479 "trsvcid": "$NVMF_PORT", 00:20:34.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.479 "hdgst": ${hdgst:-false}, 00:20:34.479 "ddgst": ${ddgst:-false} 00:20:34.479 }, 00:20:34.479 "method": "bdev_nvme_attach_controller" 00:20:34.479 } 00:20:34.479 EOF 00:20:34.479 )") 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.479 { 00:20:34.479 "params": { 00:20:34.479 "name": "Nvme$subsystem", 00:20:34.479 "trtype": "$TEST_TRANSPORT", 00:20:34.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.479 "adrfam": "ipv4", 00:20:34.479 
"trsvcid": "$NVMF_PORT", 00:20:34.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.479 "hdgst": ${hdgst:-false}, 00:20:34.479 "ddgst": ${ddgst:-false} 00:20:34.479 }, 00:20:34.479 "method": "bdev_nvme_attach_controller" 00:20:34.479 } 00:20:34.479 EOF 00:20:34.479 )") 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.479 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.479 { 00:20:34.479 "params": { 00:20:34.480 "name": "Nvme$subsystem", 00:20:34.480 "trtype": "$TEST_TRANSPORT", 00:20:34.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.480 "adrfam": "ipv4", 00:20:34.480 "trsvcid": "$NVMF_PORT", 00:20:34.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.480 "hdgst": ${hdgst:-false}, 00:20:34.480 "ddgst": ${ddgst:-false} 00:20:34.480 }, 00:20:34.480 "method": "bdev_nvme_attach_controller" 00:20:34.480 } 00:20:34.480 EOF 00:20:34.480 )") 00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.480 { 00:20:34.480 "params": { 00:20:34.480 "name": "Nvme$subsystem", 00:20:34.480 "trtype": "$TEST_TRANSPORT", 00:20:34.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.480 "adrfam": "ipv4", 00:20:34.480 "trsvcid": "$NVMF_PORT", 00:20:34.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.480 "hdgst": ${hdgst:-false}, 00:20:34.480 "ddgst": ${ddgst:-false} 00:20:34.480 }, 00:20:34.480 "method": "bdev_nvme_attach_controller" 00:20:34.480 } 00:20:34.480 EOF 00:20:34.480 )") 00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.480 { 00:20:34.480 "params": { 00:20:34.480 "name": "Nvme$subsystem", 00:20:34.480 "trtype": "$TEST_TRANSPORT", 00:20:34.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.480 "adrfam": "ipv4", 00:20:34.480 "trsvcid": "$NVMF_PORT", 00:20:34.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.480 "hdgst": ${hdgst:-false}, 00:20:34.480 "ddgst": ${ddgst:-false} 00:20:34.480 }, 00:20:34.480 "method": "bdev_nvme_attach_controller" 00:20:34.480 } 00:20:34.480 EOF 00:20:34.480 )") 00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.480 { 00:20:34.480 
"params": { 00:20:34.480 "name": "Nvme$subsystem", 00:20:34.480 "trtype": "$TEST_TRANSPORT", 00:20:34.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.480 "adrfam": "ipv4", 00:20:34.480 "trsvcid": "$NVMF_PORT", 00:20:34.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.480 "hdgst": ${hdgst:-false}, 00:20:34.480 "ddgst": ${ddgst:-false} 00:20:34.480 }, 00:20:34.480 "method": "bdev_nvme_attach_controller" 00:20:34.480 } 00:20:34.480 EOF 00:20:34.480 )") 00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.480 { 00:20:34.480 "params": { 00:20:34.480 "name": "Nvme$subsystem", 00:20:34.480 "trtype": "$TEST_TRANSPORT", 00:20:34.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.480 "adrfam": "ipv4", 00:20:34.480 "trsvcid": "$NVMF_PORT", 00:20:34.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.480 "hdgst": ${hdgst:-false}, 00:20:34.480 "ddgst": ${ddgst:-false} 00:20:34.480 }, 00:20:34.480 "method": "bdev_nvme_attach_controller" 00:20:34.480 } 00:20:34.480 EOF 00:20:34.480 )") 00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.480 { 00:20:34.480 "params": { 00:20:34.480 "name": "Nvme$subsystem", 00:20:34.480 "trtype": "$TEST_TRANSPORT", 00:20:34.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.480 "adrfam": "ipv4", 00:20:34.480 "trsvcid": "$NVMF_PORT", 00:20:34.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.480 "hdgst": ${hdgst:-false}, 00:20:34.480 "ddgst": ${ddgst:-false} 00:20:34.480 }, 00:20:34.480 "method": "bdev_nvme_attach_controller" 00:20:34.480 } 00:20:34.480 EOF 00:20:34.480 )") 00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:34.480 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:34.480 "params": { 00:20:34.480 "name": "Nvme1", 00:20:34.480 "trtype": "tcp", 00:20:34.480 "traddr": "10.0.0.2", 00:20:34.480 "adrfam": "ipv4", 00:20:34.480 "trsvcid": "4420", 00:20:34.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:34.480 "hdgst": false, 00:20:34.480 "ddgst": false 00:20:34.480 }, 00:20:34.480 "method": "bdev_nvme_attach_controller" 00:20:34.480 },{ 00:20:34.480 "params": { 00:20:34.480 "name": "Nvme2", 00:20:34.480 "trtype": "tcp", 00:20:34.480 "traddr": "10.0.0.2", 00:20:34.480 "adrfam": "ipv4", 00:20:34.480 "trsvcid": "4420", 00:20:34.480 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:34.480 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:34.480 "hdgst": false, 00:20:34.480 "ddgst": false 00:20:34.480 }, 00:20:34.480 "method": "bdev_nvme_attach_controller" 00:20:34.480 },{ 00:20:34.480 "params": { 00:20:34.480 "name": "Nvme3", 00:20:34.480 "trtype": "tcp", 00:20:34.480 "traddr": "10.0.0.2", 00:20:34.480 "adrfam": "ipv4", 00:20:34.480 "trsvcid": "4420", 00:20:34.480 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:34.480 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:34.480 "hdgst": false, 00:20:34.480 "ddgst": false 00:20:34.480 }, 00:20:34.480 "method": "bdev_nvme_attach_controller" 00:20:34.480 },{ 00:20:34.480 "params": { 00:20:34.480 "name": "Nvme4", 00:20:34.480 "trtype": "tcp", 00:20:34.480 "traddr": "10.0.0.2", 00:20:34.480 "adrfam": "ipv4", 00:20:34.480 "trsvcid": "4420", 00:20:34.480 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:34.480 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:34.480 "hdgst": false, 00:20:34.480 "ddgst": false 00:20:34.480 }, 00:20:34.480 "method": "bdev_nvme_attach_controller" 00:20:34.480 },{ 00:20:34.480 "params": { 00:20:34.480 "name": "Nvme5", 00:20:34.480 "trtype": "tcp", 00:20:34.480 "traddr": "10.0.0.2", 00:20:34.480 "adrfam": "ipv4", 00:20:34.480 "trsvcid": "4420", 00:20:34.480 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:34.480 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:34.480 "hdgst": false, 00:20:34.480 "ddgst": false 00:20:34.480 }, 00:20:34.480 "method": "bdev_nvme_attach_controller" 00:20:34.480 },{ 00:20:34.480 "params": { 00:20:34.480 "name": "Nvme6", 00:20:34.480 "trtype": "tcp", 00:20:34.480 "traddr": "10.0.0.2", 00:20:34.480 "adrfam": "ipv4", 00:20:34.480 "trsvcid": "4420", 00:20:34.480 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:34.480 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:34.480 "hdgst": false, 00:20:34.480 "ddgst": false 00:20:34.480 }, 00:20:34.480 "method": "bdev_nvme_attach_controller" 00:20:34.480 },{ 00:20:34.480 "params": { 00:20:34.480 "name": "Nvme7", 00:20:34.480 "trtype": "tcp", 00:20:34.480 "traddr": "10.0.0.2", 00:20:34.480 "adrfam": "ipv4", 00:20:34.480 "trsvcid": "4420", 00:20:34.480 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:34.480 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:34.480 "hdgst": false, 00:20:34.480 "ddgst": false 00:20:34.480 }, 00:20:34.480 "method": "bdev_nvme_attach_controller" 00:20:34.480 },{ 00:20:34.480 "params": { 00:20:34.480 "name": "Nvme8", 00:20:34.480 "trtype": "tcp", 00:20:34.480 "traddr": "10.0.0.2", 00:20:34.480 "adrfam": "ipv4", 00:20:34.480 "trsvcid": "4420", 00:20:34.480 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:34.480 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:34.480 "hdgst": false, 00:20:34.480 "ddgst": false 00:20:34.480 }, 00:20:34.480 "method": "bdev_nvme_attach_controller" 00:20:34.480 },{ 00:20:34.480 "params": { 00:20:34.480 "name": "Nvme9", 00:20:34.480 "trtype": "tcp", 00:20:34.480 "traddr": "10.0.0.2", 00:20:34.480 "adrfam": "ipv4", 00:20:34.480 "trsvcid": "4420", 00:20:34.480 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:34.480 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:34.480 "hdgst": false, 00:20:34.480 "ddgst": false 00:20:34.480 }, 00:20:34.480 "method": "bdev_nvme_attach_controller" 00:20:34.480 },{ 00:20:34.480 "params": { 00:20:34.480 "name": "Nvme10", 00:20:34.480 "trtype": "tcp", 00:20:34.480 "traddr": "10.0.0.2", 00:20:34.480 "adrfam": "ipv4", 00:20:34.480 "trsvcid": "4420", 00:20:34.480 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:34.481 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:34.481 "hdgst": false, 00:20:34.481 "ddgst": false 00:20:34.481 }, 00:20:34.481 "method": "bdev_nvme_attach_controller" 00:20:34.481 }' 00:20:34.481 [2024-12-10 04:08:28.776359] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:20:34.481 [2024-12-10 04:08:28.776436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2438359 ] 00:20:34.481 [2024-12-10 04:08:28.848933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.738 [2024-12-10 04:08:28.910740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.110 Running I/O for 1 seconds... 00:20:37.302 1669.00 IOPS, 104.31 MiB/s 00:20:37.302 Latency(us) 00:20:37.302 [2024-12-10T03:08:31.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.302 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.302 Verification LBA range: start 0x0 length 0x400 00:20:37.302 Nvme1n1 : 1.14 224.49 14.03 0.00 0.00 278453.10 22233.69 250104.79 00:20:37.302 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.302 Verification LBA range: start 0x0 length 0x400 00:20:37.302 Nvme2n1 : 1.14 227.95 14.25 0.00 0.00 269116.65 18155.90 233016.89 00:20:37.302 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.302 Verification LBA range: start 0x0 length 0x400 00:20:37.302 Nvme3n1 : 1.13 226.58 14.16 0.00 0.00 265305.13 30680.56 240784.12 00:20:37.302 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.302 Verification LBA range: start 0x0 length 0x400 00:20:37.302 Nvme4n1 : 1.15 221.80 13.86 0.00 0.00 268072.39 17864.63 253211.69 00:20:37.302 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.302 Verification LBA range: start 0x0 length 0x400 00:20:37.302 Nvme5n1 : 1.18 217.69 13.61 0.00 0.00 267789.84 24660.95 270299.59 00:20:37.302 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.302 Verification LBA range: start 0x0 length 0x400 00:20:37.302 Nvme6n1 : 1.17 219.51 13.72 0.00 0.00 259543.99 20486.07 267192.70 00:20:37.302 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.302 Verification LBA range: start 0x0 length 0x400 00:20:37.302 Nvme7n1 : 1.16 220.15 13.76 0.00 0.00 253061.88 17282.09 259425.47 00:20:37.302 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.302 
Verification LBA range: start 0x0 length 0x400 00:20:37.302 Nvme8n1 : 1.18 217.53 13.60 0.00 0.00 250272.24 20777.34 268746.15 00:20:37.302 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.302 Verification LBA range: start 0x0 length 0x400 00:20:37.302 Nvme9n1 : 1.24 258.98 16.19 0.00 0.00 207687.76 7864.32 284280.60 00:20:37.302 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.302 Verification LBA range: start 0x0 length 0x400 00:20:37.302 Nvme10n1 : 1.24 257.26 16.08 0.00 0.00 205078.49 5437.06 262532.36 00:20:37.302 [2024-12-10T03:08:31.691Z] =================================================================================================================== 00:20:37.302 [2024-12-10T03:08:31.691Z] Total : 2291.93 143.25 0.00 0.00 250280.09 5437.06 284280.60 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:37.561 rmmod nvme_tcp 00:20:37.561 rmmod nvme_fabrics 00:20:37.561 rmmod nvme_keyring 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2437870 ']' 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2437870 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2437870 ']' 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2437870 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2437870 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2437870' 00:20:37.561 killing process with pid 2437870 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2437870 00:20:37.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2437870 00:20:38.128 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:38.128 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:38.128 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:38.128 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:38.128 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:38.128 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:38.128 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:38.128 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:38.128 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:38.128 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.128 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.128 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:40.664 00:20:40.664 real 0m11.854s 00:20:40.664 user 0m34.353s 00:20:40.664 sys 0m3.243s 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:40.664 ************************************ 00:20:40.664 END TEST nvmf_shutdown_tc1 00:20:40.664 ************************************ 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
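The process cleanup traced here is autotest_common.sh's killprocess helper: it checks that the PID still exists, looks up the command name with ps, refuses to touch anything running as sudo, then signals the target and waits for it to exit. A condensed sketch of that flow as reconstructed from the xtrace lines above (an illustration, not the actual SPDK helper source):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                        # the '[' -z ... ']' guard in the trace
    kill -0 "$pid" || return 1                       # process must still be alive
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")  # Linux branch of the uname check
    [ "$process_name" = sudo ] && return 1           # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                      # block until the target has actually exited
}

In this run the victim is pid 2437870, the nvmf_tgt started for tc1 (its command name shows up as reactor_1), and the wait is what guarantees the rest of nvmftestfini runs against a fully stopped target.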
00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:40.664 ************************************ 00:20:40.664 START TEST nvmf_shutdown_tc2 00:20:40.664 ************************************ 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:40.664 04:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:40.664 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:40.664 04:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:40.664 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:40.664 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:40.664 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:40.664 04:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:40.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:20:40.664 00:20:40.664 --- 10.0.0.2 ping statistics --- 00:20:40.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.664 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:40.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:40.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:20:40.664 00:20:40.664 --- 10.0.0.1 ping statistics --- 00:20:40.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.664 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2439175 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2439175 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2439175 ']' 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
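Everything needed to talk NVMe/TCP over the physical e810 pair is set up in the nvmf_tcp_init steps traced above: one port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24 for the target, the other (cvl_0_1) stays in the default namespace as the 10.0.0.1/24 initiator side, TCP port 4420 is opened in iptables, and one ping in each direction proves the path before the target comes up. Condensed, the topology amounts to the following (the same commands shown in the trace, minus the SPDK_NVMF comment the ipts wrapper adds to the iptables rule):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

The comment tag on the real iptables rule is what lets the teardown path restore the ruleset later via iptables-save, grep -v SPDK_NVMF and iptables-restore, and NVMF_TARGET_NS_CMD is why the nvmf_tgt below is launched under ip netns exec cvl_0_0_ns_spdk, so its 10.0.0.2:4420 listener lives entirely inside that namespace.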
00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.664 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.664 [2024-12-10 04:08:34.740524] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:20:40.664 [2024-12-10 04:08:34.740627] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.664 [2024-12-10 04:08:34.814381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.664 [2024-12-10 04:08:34.870777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.664 [2024-12-10 04:08:34.870854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.664 [2024-12-10 04:08:34.870877] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.664 [2024-12-10 04:08:34.870895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.664 [2024-12-10 04:08:34.870911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.664 [2024-12-10 04:08:34.872394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.665 [2024-12-10 04:08:34.872500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.665 [2024-12-10 04:08:34.872594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:40.665 [2024-12-10 04:08:34.872599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.665 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.665 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:40.665 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:40.665 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:40.665 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.665 [2024-12-10 04:08:35.012011] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:40.665 04:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.665 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.922 Malloc1 
00:20:40.922 [2024-12-10 04:08:35.106660] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.922 Malloc2 00:20:40.922 Malloc3 00:20:40.922 Malloc4 00:20:40.922 Malloc5 00:20:41.179 Malloc6 00:20:41.179 Malloc7 00:20:41.179 Malloc8 00:20:41.179 Malloc9 00:20:41.179 Malloc10 00:20:41.179 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.179 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:41.179 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:41.179 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2439315 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2439315 /var/tmp/bdevperf.sock 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2439315 ']' 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
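For tc2 the host side is SPDK's bdevperf example application rather than a kernel initiator. It gets its own RPC socket, and the controller-attach configuration is fed to it on /dev/fd/63, i.e. through process substitution from gen_nvmf_target_json for subsystems 1 through 10. Stripped of the Jenkins workspace path, the invocation recorded above has this shape (the flag meanings are summarised from the bdevperf usage text, not from this log, and the snippet assumes nvmf/common.sh has been sourced so gen_nvmf_target_json exists):

# -r: dedicated RPC socket, -q: queue depth, -o: I/O size in bytes, -w: workload, -t: run time in seconds
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10

Once /var/tmp/bdevperf.sock is listening, the script issues framework_wait_init and then polls bdev_get_iostat on Nvme1n1 (the read_io_count values of 3, 67 and 135 further down) until at least 100 reads have completed, which is its signal that I/O is really flowing before it shuts the target down underneath the job.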
00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.437 { 00:20:41.437 "params": { 00:20:41.437 "name": "Nvme$subsystem", 00:20:41.437 "trtype": "$TEST_TRANSPORT", 00:20:41.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.437 "adrfam": "ipv4", 00:20:41.437 "trsvcid": "$NVMF_PORT", 00:20:41.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.437 "hdgst": ${hdgst:-false}, 00:20:41.437 "ddgst": ${ddgst:-false} 00:20:41.437 }, 00:20:41.437 "method": "bdev_nvme_attach_controller" 00:20:41.437 } 00:20:41.437 EOF 00:20:41.437 )") 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.437 { 00:20:41.437 "params": { 00:20:41.437 "name": "Nvme$subsystem", 00:20:41.437 "trtype": "$TEST_TRANSPORT", 00:20:41.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.437 "adrfam": "ipv4", 00:20:41.437 "trsvcid": "$NVMF_PORT", 00:20:41.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.437 "hdgst": ${hdgst:-false}, 00:20:41.437 "ddgst": ${ddgst:-false} 00:20:41.437 }, 00:20:41.437 "method": "bdev_nvme_attach_controller" 00:20:41.437 } 00:20:41.437 EOF 00:20:41.437 )") 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.437 { 00:20:41.437 "params": { 00:20:41.437 "name": "Nvme$subsystem", 00:20:41.437 "trtype": "$TEST_TRANSPORT", 00:20:41.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.437 "adrfam": "ipv4", 00:20:41.437 "trsvcid": "$NVMF_PORT", 00:20:41.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.437 "hdgst": ${hdgst:-false}, 00:20:41.437 "ddgst": ${ddgst:-false} 00:20:41.437 }, 00:20:41.437 "method": "bdev_nvme_attach_controller" 00:20:41.437 } 00:20:41.437 EOF 00:20:41.437 )") 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.437 { 00:20:41.437 "params": { 00:20:41.437 "name": "Nvme$subsystem", 00:20:41.437 
"trtype": "$TEST_TRANSPORT", 00:20:41.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.437 "adrfam": "ipv4", 00:20:41.437 "trsvcid": "$NVMF_PORT", 00:20:41.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.437 "hdgst": ${hdgst:-false}, 00:20:41.437 "ddgst": ${ddgst:-false} 00:20:41.437 }, 00:20:41.437 "method": "bdev_nvme_attach_controller" 00:20:41.437 } 00:20:41.437 EOF 00:20:41.437 )") 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.437 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.437 { 00:20:41.437 "params": { 00:20:41.437 "name": "Nvme$subsystem", 00:20:41.437 "trtype": "$TEST_TRANSPORT", 00:20:41.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.437 "adrfam": "ipv4", 00:20:41.437 "trsvcid": "$NVMF_PORT", 00:20:41.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.437 "hdgst": ${hdgst:-false}, 00:20:41.437 "ddgst": ${ddgst:-false} 00:20:41.437 }, 00:20:41.438 "method": "bdev_nvme_attach_controller" 00:20:41.438 } 00:20:41.438 EOF 00:20:41.438 )") 00:20:41.438 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:41.438 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.438 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.438 { 00:20:41.438 "params": { 00:20:41.438 "name": "Nvme$subsystem", 00:20:41.438 "trtype": "$TEST_TRANSPORT", 00:20:41.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.438 "adrfam": "ipv4", 00:20:41.438 "trsvcid": "$NVMF_PORT", 00:20:41.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.438 "hdgst": ${hdgst:-false}, 00:20:41.438 "ddgst": ${ddgst:-false} 00:20:41.438 }, 00:20:41.438 "method": "bdev_nvme_attach_controller" 00:20:41.438 } 00:20:41.438 EOF 00:20:41.438 )") 00:20:41.438 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:41.438 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.438 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.438 { 00:20:41.438 "params": { 00:20:41.438 "name": "Nvme$subsystem", 00:20:41.438 "trtype": "$TEST_TRANSPORT", 00:20:41.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.438 "adrfam": "ipv4", 00:20:41.438 "trsvcid": "$NVMF_PORT", 00:20:41.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.438 "hdgst": ${hdgst:-false}, 00:20:41.438 "ddgst": ${ddgst:-false} 00:20:41.438 }, 00:20:41.438 "method": "bdev_nvme_attach_controller" 00:20:41.438 } 00:20:41.438 EOF 00:20:41.438 )") 00:20:41.438 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:41.438 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.438 04:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.438 { 00:20:41.438 "params": { 00:20:41.438 "name": "Nvme$subsystem", 00:20:41.438 "trtype": "$TEST_TRANSPORT", 00:20:41.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.438 "adrfam": "ipv4", 00:20:41.438 "trsvcid": "$NVMF_PORT", 00:20:41.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.438 "hdgst": ${hdgst:-false}, 00:20:41.438 "ddgst": ${ddgst:-false} 00:20:41.438 }, 00:20:41.438 "method": "bdev_nvme_attach_controller" 00:20:41.438 } 00:20:41.438 EOF 00:20:41.438 )") 00:20:41.438 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:41.438 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.438 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.438 { 00:20:41.438 "params": { 00:20:41.438 "name": "Nvme$subsystem", 00:20:41.438 "trtype": "$TEST_TRANSPORT", 00:20:41.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.438 "adrfam": "ipv4", 00:20:41.438 "trsvcid": "$NVMF_PORT", 00:20:41.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.438 "hdgst": ${hdgst:-false}, 00:20:41.438 "ddgst": ${ddgst:-false} 00:20:41.438 }, 00:20:41.438 "method": "bdev_nvme_attach_controller" 00:20:41.438 } 00:20:41.438 EOF 00:20:41.438 )") 00:20:41.438 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:41.438 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.438 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.438 { 00:20:41.438 "params": { 00:20:41.438 "name": "Nvme$subsystem", 00:20:41.438 "trtype": "$TEST_TRANSPORT", 00:20:41.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.438 "adrfam": "ipv4", 00:20:41.438 "trsvcid": "$NVMF_PORT", 00:20:41.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.438 "hdgst": ${hdgst:-false}, 00:20:41.438 "ddgst": ${ddgst:-false} 00:20:41.438 }, 00:20:41.438 "method": "bdev_nvme_attach_controller" 00:20:41.438 } 00:20:41.438 EOF 00:20:41.438 )") 00:20:41.438 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:41.438 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
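The repeated heredoc just traced is gen_nvmf_target_json building one bdev_nvme_attach_controller entry per subsystem number it was given: each pass appends a fragment with $subsystem expanded, the fragments are joined with commas by the IFS=, and printf pair, and jq then emits the configuration that bdevperf reads. The outer wrapper jq applies is not visible in this trace, so the sketch below stops at the comma-joined fragments; it is an illustration of the pattern, not the nvmf/common.sh source, and it hard-codes two subsystems and the 10.0.0.2:4420 target used in this run:

config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"   # comma-joined fragments, matching the block printed below

Running the sketch prints two attach-controller entries separated by "},{", the same shape as the ten-entry block that follows in the log.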
00:20:41.438 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:41.438 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:41.438 "params": { 00:20:41.438 "name": "Nvme1", 00:20:41.438 "trtype": "tcp", 00:20:41.438 "traddr": "10.0.0.2", 00:20:41.438 "adrfam": "ipv4", 00:20:41.438 "trsvcid": "4420", 00:20:41.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.438 "hdgst": false, 00:20:41.438 "ddgst": false 00:20:41.438 }, 00:20:41.438 "method": "bdev_nvme_attach_controller" 00:20:41.438 },{ 00:20:41.438 "params": { 00:20:41.438 "name": "Nvme2", 00:20:41.438 "trtype": "tcp", 00:20:41.438 "traddr": "10.0.0.2", 00:20:41.438 "adrfam": "ipv4", 00:20:41.438 "trsvcid": "4420", 00:20:41.438 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:41.438 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:41.438 "hdgst": false, 00:20:41.438 "ddgst": false 00:20:41.438 }, 00:20:41.438 "method": "bdev_nvme_attach_controller" 00:20:41.438 },{ 00:20:41.438 "params": { 00:20:41.438 "name": "Nvme3", 00:20:41.438 "trtype": "tcp", 00:20:41.438 "traddr": "10.0.0.2", 00:20:41.438 "adrfam": "ipv4", 00:20:41.438 "trsvcid": "4420", 00:20:41.438 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:41.438 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:41.438 "hdgst": false, 00:20:41.438 "ddgst": false 00:20:41.438 }, 00:20:41.438 "method": "bdev_nvme_attach_controller" 00:20:41.438 },{ 00:20:41.438 "params": { 00:20:41.438 "name": "Nvme4", 00:20:41.438 "trtype": "tcp", 00:20:41.438 "traddr": "10.0.0.2", 00:20:41.438 "adrfam": "ipv4", 00:20:41.438 "trsvcid": "4420", 00:20:41.438 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:41.438 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:41.438 "hdgst": false, 00:20:41.438 "ddgst": false 00:20:41.438 }, 00:20:41.438 "method": "bdev_nvme_attach_controller" 00:20:41.438 },{ 00:20:41.438 "params": { 00:20:41.438 "name": "Nvme5", 00:20:41.438 "trtype": "tcp", 00:20:41.438 "traddr": "10.0.0.2", 00:20:41.438 "adrfam": "ipv4", 00:20:41.438 "trsvcid": "4420", 00:20:41.438 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:41.438 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:41.438 "hdgst": false, 00:20:41.438 "ddgst": false 00:20:41.438 }, 00:20:41.438 "method": "bdev_nvme_attach_controller" 00:20:41.438 },{ 00:20:41.438 "params": { 00:20:41.438 "name": "Nvme6", 00:20:41.438 "trtype": "tcp", 00:20:41.438 "traddr": "10.0.0.2", 00:20:41.438 "adrfam": "ipv4", 00:20:41.438 "trsvcid": "4420", 00:20:41.438 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:41.438 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:41.438 "hdgst": false, 00:20:41.438 "ddgst": false 00:20:41.438 }, 00:20:41.438 "method": "bdev_nvme_attach_controller" 00:20:41.438 },{ 00:20:41.438 "params": { 00:20:41.438 "name": "Nvme7", 00:20:41.438 "trtype": "tcp", 00:20:41.438 "traddr": "10.0.0.2", 00:20:41.438 "adrfam": "ipv4", 00:20:41.438 "trsvcid": "4420", 00:20:41.438 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:41.438 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:41.438 "hdgst": false, 00:20:41.438 "ddgst": false 00:20:41.438 }, 00:20:41.438 "method": "bdev_nvme_attach_controller" 00:20:41.438 },{ 00:20:41.438 "params": { 00:20:41.438 "name": "Nvme8", 00:20:41.438 "trtype": "tcp", 00:20:41.438 "traddr": "10.0.0.2", 00:20:41.438 "adrfam": "ipv4", 00:20:41.438 "trsvcid": "4420", 00:20:41.438 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:41.438 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:41.438 "hdgst": false, 00:20:41.438 "ddgst": false 00:20:41.438 }, 00:20:41.438 "method": "bdev_nvme_attach_controller" 00:20:41.438 },{ 00:20:41.438 "params": { 00:20:41.438 "name": "Nvme9", 00:20:41.438 "trtype": "tcp", 00:20:41.438 "traddr": "10.0.0.2", 00:20:41.438 "adrfam": "ipv4", 00:20:41.438 "trsvcid": "4420", 00:20:41.438 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:41.438 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:41.438 "hdgst": false, 00:20:41.438 "ddgst": false 00:20:41.438 }, 00:20:41.438 "method": "bdev_nvme_attach_controller" 00:20:41.438 },{ 00:20:41.438 "params": { 00:20:41.438 "name": "Nvme10", 00:20:41.438 "trtype": "tcp", 00:20:41.438 "traddr": "10.0.0.2", 00:20:41.438 "adrfam": "ipv4", 00:20:41.438 "trsvcid": "4420", 00:20:41.438 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:41.438 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:41.438 "hdgst": false, 00:20:41.438 "ddgst": false 00:20:41.438 }, 00:20:41.438 "method": "bdev_nvme_attach_controller" 00:20:41.438 }' 00:20:41.438 [2024-12-10 04:08:35.615271] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:20:41.439 [2024-12-10 04:08:35.615347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2439315 ] 00:20:41.439 [2024-12-10 04:08:35.687859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.439 [2024-12-10 04:08:35.746898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.336 Running I/O for 10 seconds... 00:20:43.336 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.336 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:43.336 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:43.336 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.336 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.336 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.336 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:43.336 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:43.336 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:43.336 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:43.336 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:43.336 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:43.336 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:43.336 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:43.336 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.336 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:43.336 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.594 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.594 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:43.594 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:43.594 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:43.852 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:43.852 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:43.852 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:43.852 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:43.852 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.852 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.852 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.852 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:43.852 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:43.852 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:44.110 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:44.110 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:44.110 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=135 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:20:44.111 04:08:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2439315 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2439315 ']' 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2439315 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2439315 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2439315' 00:20:44.111 killing process with pid 2439315 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2439315 00:20:44.111 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2439315 00:20:44.111 Received shutdown signal, test time was about 0.989919 seconds 00:20:44.111 00:20:44.111 Latency(us) 00:20:44.111 [2024-12-10T03:08:38.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.111 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:44.111 Verification LBA range: start 0x0 length 0x400 00:20:44.111 Nvme1n1 : 0.99 263.69 16.48 0.00 0.00 239724.48 2645.71 237677.23 00:20:44.111 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:44.111 Verification LBA range: start 0x0 length 0x400 00:20:44.111 Nvme2n1 : 0.98 266.02 16.63 0.00 0.00 232406.54 7621.59 250104.79 00:20:44.111 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:44.111 Verification LBA range: start 0x0 length 0x400 00:20:44.111 Nvme3n1 : 0.98 262.53 16.41 0.00 0.00 231885.18 18835.53 253211.69 00:20:44.111 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:44.111 Verification LBA range: start 0x0 length 0x400 00:20:44.111 Nvme4n1 : 0.98 262.00 16.38 0.00 0.00 227542.47 18544.26 254765.13 00:20:44.111 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:44.111 Verification LBA range: start 0x0 length 0x400 00:20:44.111 Nvme5n1 : 0.95 206.90 12.93 0.00 0.00 278535.82 3349.62 236123.78 00:20:44.111 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:44.111 Verification LBA range: start 0x0 length 0x400 00:20:44.111 Nvme6n1 : 0.99 258.83 16.18 0.00 0.00 221631.91 22719.15 267192.70 00:20:44.111 Job: Nvme7n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:20:44.111 Verification LBA range: start 0x0 length 0x400 00:20:44.111 Nvme7n1 : 0.94 203.92 12.75 0.00 0.00 273713.43 39612.87 239230.67 00:20:44.111 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:44.111 Verification LBA range: start 0x0 length 0x400 00:20:44.111 Nvme8n1 : 0.95 202.04 12.63 0.00 0.00 270843.95 20583.16 254765.13 00:20:44.111 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:44.111 Verification LBA range: start 0x0 length 0x400 00:20:44.111 Nvme9n1 : 0.96 199.36 12.46 0.00 0.00 269336.78 34175.81 271853.04 00:20:44.111 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:44.111 Verification LBA range: start 0x0 length 0x400 00:20:44.111 Nvme10n1 : 0.97 198.54 12.41 0.00 0.00 264686.05 19029.71 287387.50 00:20:44.111 [2024-12-10T03:08:38.500Z] =================================================================================================================== 00:20:44.111 [2024-12-10T03:08:38.500Z] Total : 2323.82 145.24 0.00 0.00 248135.17 2645.71 287387.50 00:20:44.369 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:45.301 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2439175 00:20:45.301 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:45.301 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:45.301 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:45.301 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:45.301 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:45.301 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:45.301 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:45.301 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:45.301 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:45.301 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:45.301 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:45.301 rmmod nvme_tcp 00:20:45.560 rmmod nvme_fabrics 00:20:45.560 rmmod nvme_keyring 00:20:45.560 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:45.560 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:45.560 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:45.560 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2439175 ']' 00:20:45.560 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@518 -- # killprocess 2439175 00:20:45.560 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2439175 ']' 00:20:45.560 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2439175 00:20:45.560 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:45.560 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.560 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2439175 00:20:45.560 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:45.560 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:45.560 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2439175' 00:20:45.560 killing process with pid 2439175 00:20:45.560 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2439175 00:20:45.560 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2439175 00:20:46.128 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:46.128 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:46.128 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:46.128 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:46.128 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:46.128 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:46.128 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:46.128 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:46.128 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:46.128 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.128 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.128 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:48.033 00:20:48.033 real 0m7.755s 00:20:48.033 user 0m23.891s 00:20:48.033 sys 0m1.544s 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- 
# set +x 00:20:48.033 ************************************ 00:20:48.033 END TEST nvmf_shutdown_tc2 00:20:48.033 ************************************ 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:48.033 ************************************ 00:20:48.033 START TEST nvmf_shutdown_tc3 00:20:48.033 ************************************ 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:48.033 04:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:48.033 04:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:48.033 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:48.033 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:20:48.033 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.033 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:48.034 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:48.034 04:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:48.034 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:48.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:48.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:20:48.292 00:20:48.292 --- 10.0.0.2 ping statistics --- 00:20:48.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.292 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:48.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:48.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:20:48.292 00:20:48.292 --- 10.0.0.1 ping statistics --- 00:20:48.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.292 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2440230 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2440230 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2440230 ']' 00:20:48.292 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.293 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.293 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
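The nvmf_tcp_init trace above amounts to a two-port loopback topology: one E810 port (cvl_0_0) is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as the target at 10.0.0.2, its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and reachability is checked in both directions before the target application is started inside the namespace. A condensed sketch of that wiring, using the interface and namespace names from this run (the real nvmf_tcp_init helper also handles optional second addresses and cleanup):

#!/usr/bin/env bash
# Condensed sketch of the namespace wiring traced above; run as root with both ports present.
set -e
NS=cvl_0_0_ns_spdk      # namespace that owns the target-side port
TGT_IF=cvl_0_0          # port used by the SPDK target (10.0.0.2)
INI_IF=cvl_0_1          # port used by the initiator/bdevperf side (10.0.0.1)

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# accept NVMe/TCP traffic (port 4420) arriving on the initiator-side port, as the ipts call above does
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# reachability checks in both directions, mirroring the two pings above
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1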
00:20:48.293 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.293 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.293 [2024-12-10 04:08:42.553066] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:20:48.293 [2024-12-10 04:08:42.553157] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.293 [2024-12-10 04:08:42.632051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:48.551 [2024-12-10 04:08:42.695221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.551 [2024-12-10 04:08:42.695270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.551 [2024-12-10 04:08:42.695292] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.551 [2024-12-10 04:08:42.695309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.551 [2024-12-10 04:08:42.695323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:48.551 [2024-12-10 04:08:42.696992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.551 [2024-12-10 04:08:42.697071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:48.551 [2024-12-10 04:08:42.697136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:48.551 [2024-12-10 04:08:42.697140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.551 [2024-12-10 04:08:42.849556] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:48.551 04:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.551 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.551 Malloc1 
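The create_subsystems block above removes any stale rpcs.txt and then appends one batch of RPCs per subsystem (the ten cat heredoc iterations) before applying them with a single rpc_cmd call; the Malloc1 just above and the Malloc2 through Malloc10 that follow appear to be the bdev names returned as that batch executes. The exact contents of rpcs.txt are not echoed in this log, but assuming the standard scripts/rpc.py methods, each iteration presumably appends something along these lines:

# Illustrative only -- the rpcs.txt that shutdown.sh writes is not shown in this log.
# One plausible per-subsystem batch, using standard scripts/rpc.py method names:
i=1
cat >> rpcs.txt <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
# shutdown.sh then applies the accumulated batch through the rpc_cmd helper
# (the invocation details are not visible in the xtrace above).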
00:20:48.809 [2024-12-10 04:08:42.947919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.809 Malloc2 00:20:48.809 Malloc3 00:20:48.809 Malloc4 00:20:48.809 Malloc5 00:20:48.809 Malloc6 00:20:49.067 Malloc7 00:20:49.067 Malloc8 00:20:49.067 Malloc9 00:20:49.067 Malloc10 00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2440405 00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2440405 /var/tmp/bdevperf.sock 00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2440405 ']' 00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:49.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
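With the target populated, the test starts bdevperf as the initiator-side application: the JSON generated by gen_nvmf_target_json 1..10 is handed over on /dev/fd/63 via process substitution, and the workload parameters match the tc2 run earlier in this log (queue depth 64, 64 KiB I/O, verify workload, 10 second runtime). A minimal sketch of that launch pattern, with gen_config standing in for the real gen_nvmf_target_json helper:

# Sketch of the bdevperf launch traced above; gen_config is a placeholder for gen_nvmf_target_json.
bdevperf=./build/examples/bdevperf
gen_config() { cat bdevperf_nvme.json; }   # placeholder: emit the attach-controller JSON shown below

# -r: private RPC socket for this bdevperf instance, --json: config arrives on /dev/fd/63,
# -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify: verify workload, -t 10: run for 10 s
"$bdevperf" -r /var/tmp/bdevperf.sock --json <(gen_config) -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
# The test records this pid (2440405 above) and uses waitforlisten on /var/tmp/bdevperf.sock
# before it begins polling I/O counters further down.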
00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.067 { 00:20:49.067 "params": { 00:20:49.067 "name": "Nvme$subsystem", 00:20:49.067 "trtype": "$TEST_TRANSPORT", 00:20:49.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.067 "adrfam": "ipv4", 00:20:49.067 "trsvcid": "$NVMF_PORT", 00:20:49.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.067 "hdgst": ${hdgst:-false}, 00:20:49.067 "ddgst": ${ddgst:-false} 00:20:49.067 }, 00:20:49.067 "method": "bdev_nvme_attach_controller" 00:20:49.067 } 00:20:49.067 EOF 00:20:49.067 )") 00:20:49.067 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.068 { 00:20:49.068 "params": { 00:20:49.068 "name": "Nvme$subsystem", 00:20:49.068 "trtype": "$TEST_TRANSPORT", 00:20:49.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.068 "adrfam": "ipv4", 00:20:49.068 "trsvcid": "$NVMF_PORT", 00:20:49.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.068 "hdgst": ${hdgst:-false}, 00:20:49.068 "ddgst": ${ddgst:-false} 00:20:49.068 }, 00:20:49.068 "method": "bdev_nvme_attach_controller" 00:20:49.068 } 00:20:49.068 EOF 00:20:49.068 )") 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.068 { 00:20:49.068 "params": { 00:20:49.068 "name": "Nvme$subsystem", 00:20:49.068 "trtype": "$TEST_TRANSPORT", 00:20:49.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.068 "adrfam": "ipv4", 00:20:49.068 "trsvcid": "$NVMF_PORT", 00:20:49.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.068 "hdgst": ${hdgst:-false}, 00:20:49.068 "ddgst": ${ddgst:-false} 00:20:49.068 }, 00:20:49.068 "method": "bdev_nvme_attach_controller" 00:20:49.068 } 00:20:49.068 EOF 00:20:49.068 )") 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.068 { 00:20:49.068 "params": { 00:20:49.068 "name": "Nvme$subsystem", 00:20:49.068 
"trtype": "$TEST_TRANSPORT", 00:20:49.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.068 "adrfam": "ipv4", 00:20:49.068 "trsvcid": "$NVMF_PORT", 00:20:49.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.068 "hdgst": ${hdgst:-false}, 00:20:49.068 "ddgst": ${ddgst:-false} 00:20:49.068 }, 00:20:49.068 "method": "bdev_nvme_attach_controller" 00:20:49.068 } 00:20:49.068 EOF 00:20:49.068 )") 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.068 { 00:20:49.068 "params": { 00:20:49.068 "name": "Nvme$subsystem", 00:20:49.068 "trtype": "$TEST_TRANSPORT", 00:20:49.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.068 "adrfam": "ipv4", 00:20:49.068 "trsvcid": "$NVMF_PORT", 00:20:49.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.068 "hdgst": ${hdgst:-false}, 00:20:49.068 "ddgst": ${ddgst:-false} 00:20:49.068 }, 00:20:49.068 "method": "bdev_nvme_attach_controller" 00:20:49.068 } 00:20:49.068 EOF 00:20:49.068 )") 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.068 { 00:20:49.068 "params": { 00:20:49.068 "name": "Nvme$subsystem", 00:20:49.068 "trtype": "$TEST_TRANSPORT", 00:20:49.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.068 "adrfam": "ipv4", 00:20:49.068 "trsvcid": "$NVMF_PORT", 00:20:49.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.068 "hdgst": ${hdgst:-false}, 00:20:49.068 "ddgst": ${ddgst:-false} 00:20:49.068 }, 00:20:49.068 "method": "bdev_nvme_attach_controller" 00:20:49.068 } 00:20:49.068 EOF 00:20:49.068 )") 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.068 { 00:20:49.068 "params": { 00:20:49.068 "name": "Nvme$subsystem", 00:20:49.068 "trtype": "$TEST_TRANSPORT", 00:20:49.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.068 "adrfam": "ipv4", 00:20:49.068 "trsvcid": "$NVMF_PORT", 00:20:49.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.068 "hdgst": ${hdgst:-false}, 00:20:49.068 "ddgst": ${ddgst:-false} 00:20:49.068 }, 00:20:49.068 "method": "bdev_nvme_attach_controller" 00:20:49.068 } 00:20:49.068 EOF 00:20:49.068 )") 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.068 04:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.068 { 00:20:49.068 "params": { 00:20:49.068 "name": "Nvme$subsystem", 00:20:49.068 "trtype": "$TEST_TRANSPORT", 00:20:49.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.068 "adrfam": "ipv4", 00:20:49.068 "trsvcid": "$NVMF_PORT", 00:20:49.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.068 "hdgst": ${hdgst:-false}, 00:20:49.068 "ddgst": ${ddgst:-false} 00:20:49.068 }, 00:20:49.068 "method": "bdev_nvme_attach_controller" 00:20:49.068 } 00:20:49.068 EOF 00:20:49.068 )") 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.068 { 00:20:49.068 "params": { 00:20:49.068 "name": "Nvme$subsystem", 00:20:49.068 "trtype": "$TEST_TRANSPORT", 00:20:49.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.068 "adrfam": "ipv4", 00:20:49.068 "trsvcid": "$NVMF_PORT", 00:20:49.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.068 "hdgst": ${hdgst:-false}, 00:20:49.068 "ddgst": ${ddgst:-false} 00:20:49.068 }, 00:20:49.068 "method": "bdev_nvme_attach_controller" 00:20:49.068 } 00:20:49.068 EOF 00:20:49.068 )") 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.068 { 00:20:49.068 "params": { 00:20:49.068 "name": "Nvme$subsystem", 00:20:49.068 "trtype": "$TEST_TRANSPORT", 00:20:49.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.068 "adrfam": "ipv4", 00:20:49.068 "trsvcid": "$NVMF_PORT", 00:20:49.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.068 "hdgst": ${hdgst:-false}, 00:20:49.068 "ddgst": ${ddgst:-false} 00:20:49.068 }, 00:20:49.068 "method": "bdev_nvme_attach_controller" 00:20:49.068 } 00:20:49.068 EOF 00:20:49.068 )") 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
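The loop above is how gen_nvmf_target_json assembles that configuration: one heredoc fragment per requested subsystem is pushed into a bash array, the comma-joined array (IFS=,) is printed and run through jq, and the fully expanded entries for Nvme1 through Nvme10 follow below. Stripped to its core, the accumulation pattern looks roughly like this (a sketch only; the real helper fills the values from the test environment and wraps the entries in the full document that bdevperf's --json option expects):

# Sketch of the fragment-accumulation pattern traced above (simplified; the real helper takes
# the subsystem list as arguments and derives addresses and digest settings from the environment).
config=()
for subsystem in {1..10}; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
IFS=,
printf '%s\n' "${config[*]}"   # comma-joined attach_controller entries, as printed just below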
00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:49.068 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:49.068 "params": { 00:20:49.068 "name": "Nvme1", 00:20:49.068 "trtype": "tcp", 00:20:49.068 "traddr": "10.0.0.2", 00:20:49.068 "adrfam": "ipv4", 00:20:49.068 "trsvcid": "4420", 00:20:49.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:49.068 "hdgst": false, 00:20:49.068 "ddgst": false 00:20:49.068 }, 00:20:49.068 "method": "bdev_nvme_attach_controller" 00:20:49.068 },{ 00:20:49.068 "params": { 00:20:49.068 "name": "Nvme2", 00:20:49.068 "trtype": "tcp", 00:20:49.068 "traddr": "10.0.0.2", 00:20:49.068 "adrfam": "ipv4", 00:20:49.068 "trsvcid": "4420", 00:20:49.068 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:49.068 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:49.068 "hdgst": false, 00:20:49.068 "ddgst": false 00:20:49.068 }, 00:20:49.068 "method": "bdev_nvme_attach_controller" 00:20:49.068 },{ 00:20:49.068 "params": { 00:20:49.068 "name": "Nvme3", 00:20:49.068 "trtype": "tcp", 00:20:49.068 "traddr": "10.0.0.2", 00:20:49.068 "adrfam": "ipv4", 00:20:49.068 "trsvcid": "4420", 00:20:49.068 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:49.068 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:49.068 "hdgst": false, 00:20:49.068 "ddgst": false 00:20:49.068 }, 00:20:49.068 "method": "bdev_nvme_attach_controller" 00:20:49.068 },{ 00:20:49.068 "params": { 00:20:49.068 "name": "Nvme4", 00:20:49.068 "trtype": "tcp", 00:20:49.068 "traddr": "10.0.0.2", 00:20:49.068 "adrfam": "ipv4", 00:20:49.068 "trsvcid": "4420", 00:20:49.068 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:49.068 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:49.068 "hdgst": false, 00:20:49.069 "ddgst": false 00:20:49.069 }, 00:20:49.069 "method": "bdev_nvme_attach_controller" 00:20:49.069 },{ 00:20:49.069 "params": { 00:20:49.069 "name": "Nvme5", 00:20:49.069 "trtype": "tcp", 00:20:49.069 "traddr": "10.0.0.2", 00:20:49.069 "adrfam": "ipv4", 00:20:49.069 "trsvcid": "4420", 00:20:49.069 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:49.069 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:49.069 "hdgst": false, 00:20:49.069 "ddgst": false 00:20:49.069 }, 00:20:49.069 "method": "bdev_nvme_attach_controller" 00:20:49.069 },{ 00:20:49.069 "params": { 00:20:49.069 "name": "Nvme6", 00:20:49.069 "trtype": "tcp", 00:20:49.069 "traddr": "10.0.0.2", 00:20:49.069 "adrfam": "ipv4", 00:20:49.069 "trsvcid": "4420", 00:20:49.069 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:49.069 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:49.069 "hdgst": false, 00:20:49.069 "ddgst": false 00:20:49.069 }, 00:20:49.069 "method": "bdev_nvme_attach_controller" 00:20:49.069 },{ 00:20:49.069 "params": { 00:20:49.069 "name": "Nvme7", 00:20:49.069 "trtype": "tcp", 00:20:49.069 "traddr": "10.0.0.2", 00:20:49.069 "adrfam": "ipv4", 00:20:49.069 "trsvcid": "4420", 00:20:49.069 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:49.069 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:49.069 "hdgst": false, 00:20:49.069 "ddgst": false 00:20:49.069 }, 00:20:49.069 "method": "bdev_nvme_attach_controller" 00:20:49.069 },{ 00:20:49.069 "params": { 00:20:49.069 "name": "Nvme8", 00:20:49.069 "trtype": "tcp", 00:20:49.069 "traddr": "10.0.0.2", 00:20:49.069 "adrfam": "ipv4", 00:20:49.069 "trsvcid": "4420", 00:20:49.069 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:49.069 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:49.069 "hdgst": false, 00:20:49.069 "ddgst": false 00:20:49.069 }, 00:20:49.069 "method": "bdev_nvme_attach_controller" 00:20:49.069 },{ 00:20:49.069 "params": { 00:20:49.069 "name": "Nvme9", 00:20:49.069 "trtype": "tcp", 00:20:49.069 "traddr": "10.0.0.2", 00:20:49.069 "adrfam": "ipv4", 00:20:49.069 "trsvcid": "4420", 00:20:49.069 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:49.069 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:49.069 "hdgst": false, 00:20:49.069 "ddgst": false 00:20:49.069 }, 00:20:49.069 "method": "bdev_nvme_attach_controller" 00:20:49.069 },{ 00:20:49.069 "params": { 00:20:49.069 "name": "Nvme10", 00:20:49.069 "trtype": "tcp", 00:20:49.069 "traddr": "10.0.0.2", 00:20:49.069 "adrfam": "ipv4", 00:20:49.069 "trsvcid": "4420", 00:20:49.069 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:49.069 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:49.069 "hdgst": false, 00:20:49.069 "ddgst": false 00:20:49.069 }, 00:20:49.069 "method": "bdev_nvme_attach_controller" 00:20:49.069 }' 00:20:49.326 [2024-12-10 04:08:43.454792] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:20:49.327 [2024-12-10 04:08:43.454905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2440405 ] 00:20:49.327 [2024-12-10 04:08:43.527980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.327 [2024-12-10 04:08:43.587596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.224 Running I/O for 10 seconds... 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:51.224 04:08:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:51.224 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:51.483 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:51.483 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:51.483 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:51.483 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:51.483 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.483 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:51.483 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.483 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:51.483 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:51.483 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:51.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:51.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:51.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:51.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:51.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:51.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # read_io_count=135 00:20:51.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:20:51.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:51.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:51.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:51.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2440230 00:20:51.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2440230 ']' 00:20:51.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2440230 00:20:51.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:20:51.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2440230 00:20:52.017 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:52.017 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:52.017 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2440230' 00:20:52.017 killing process with pid 2440230 00:20:52.017 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2440230 00:20:52.017 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2440230 00:20:52.017 [2024-12-10 04:08:46.152857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.152991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is 
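The shell trace above is the tc3 wait loop: starting from a ten-try counter it reads num_read_ops for Nvme1n1 from bdev_get_iostat on the bdevperf RPC socket (3, then 67, then 135 reads across the three polls), breaks once the count reaches 100, and killprocess then takes pid 2440230 down out from under the still-running bdevperf initiator (bdevperf itself is pid 2440405, per its DPDK file-prefix). A minimal stand-alone sketch of the same pattern, assuming SPDK's scripts/rpc.py in place of the suite's rpc_cmd wrapper, with the socket path, bdev name, threshold and poll interval taken from the trace:

    #!/usr/bin/env bash
    # Illustrative sketch of the traced wait-then-kill pattern; not the suite's
    # waitforio/killprocess implementation.
    set -euo pipefail

    RPC_SOCK=/var/tmp/bdevperf.sock   # bdevperf RPC socket, as in the trace
    BDEV=Nvme1n1                      # bdev polled by the trace
    THRESHOLD=100                     # minimum reads before shutdown, as in the trace
    TARGET_PID=2440230                # pid reported by the trace; placeholder here

    ret=1
    for (( i = 10; i != 0; i-- )); do
        # How many reads has the bdev completed so far?
        read_io_count=$(scripts/rpc.py -s "$RPC_SOCK" bdev_get_iostat -b "$BDEV" \
                        | jq -r '.bdevs[0].num_read_ops')
        if [ "${read_io_count:-0}" -ge "$THRESHOLD" ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    [ "$ret" -eq 0 ] || { echo "never reached $THRESHOLD reads" >&2; exit 1; }

    # Take the process away while I/O is still in flight, mirroring killprocess.
    if kill -0 "$TARGET_PID" 2>/dev/null; then
        kill "$TARGET_PID"
    fi

The qpair-state and aborted-I/O messages that follow are the fallout of removing that process underneath an active initiator, which is what this shutdown case exercises.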
same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153367] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the 
state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.017 [2024-12-10 04:08:46.153730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.153742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.153754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.153766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.153778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.153789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e70 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.158404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.158453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.158484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.158500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.158517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.158518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.158539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.158566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.158567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.158581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.158605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.158603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.158620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.158628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.158636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.158650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.158650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.158676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.158682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.158690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.158708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:1[2024-12-10 04:08:46.158706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.158725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.158736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with [2024-12-10 04:08:46.158741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:1the state(6) to be set 00:20:52.018 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.158757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.158759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.158773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.158782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with [2024-12-10 04:08:46.158787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:20:52.018 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.158805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:33 nsid:1 lba:28800 len:1[2024-12-10 04:08:46.158804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.158827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.158831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.158844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.158852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.158859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.158877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:1[2024-12-10 04:08:46.158873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.158894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.158898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.158909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.158919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with [2024-12-10 04:08:46.158924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:20:52.018 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.158943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.158942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.158957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.158964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.158973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.158988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.158985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.159005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.159010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.159020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.159031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with [2024-12-10 04:08:46.159036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:1the state(6) to be set 00:20:52.018 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.159053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.159055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.159073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.159091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.159103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.159112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.159119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.159134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.159133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.159150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.159155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.159164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.159176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with [2024-12-10 04:08:46.159180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:1the state(6) to be set 00:20:52.018 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.159198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.159199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 
00:20:52.018 [2024-12-10 04:08:46.159213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 [2024-12-10 04:08:46.159220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.018 [2024-12-10 04:08:46.159228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.018 [2024-12-10 04:08:46.159244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:1[2024-12-10 04:08:46.159242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.018 the state(6) to be set 00:20:52.019 [2024-12-10 04:08:46.159261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.019 [2024-12-10 04:08:46.159276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.019 [2024-12-10 04:08:46.159306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.019 [2024-12-10 04:08:46.159323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.019 [2024-12-10 04:08:46.159339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.019 [2024-12-10 04:08:46.159369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.019 [2024-12-10 04:08:46.159383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 
nsid:1 lba:31104 len:1[2024-12-10 04:08:46.159395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 the state(6) to be set 00:20:52.019 [2024-12-10 04:08:46.159414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.019 [2024-12-10 04:08:46.159430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with [2024-12-10 04:08:46.159444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:20:52.019 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.019 [2024-12-10 04:08:46.159477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.019 [2024-12-10 04:08:46.159493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.019 [2024-12-10 04:08:46.159523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.019 [2024-12-10 04:08:46.159538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with [2024-12-10 04:08:46.159596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:1the state(6) to be set 00:20:52.019 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.019 [2024-12-10 04:08:46.159632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with [2024-12-10 04:08:46.159647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:20:52.019 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.019 [2024-12-10 04:08:46.159679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with the state(6) to be set 00:20:52.019 [2024-12-10 04:08:46.159700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9bc0 is same with [2024-12-10 04:08:46.159714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:20:52.019 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.159985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.159998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.160013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.160026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.160042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.160056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.160070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.160084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.160099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.160112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.160127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.160140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.160155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.019 [2024-12-10 04:08:46.160169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.019 [2024-12-10 04:08:46.160185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:52.020 [2024-12-10 04:08:46.160198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 [2024-12-10 04:08:46.160213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.020 [2024-12-10 04:08:46.160226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 [2024-12-10 04:08:46.160241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.020 [2024-12-10 04:08:46.160254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 [2024-12-10 04:08:46.160273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.020 [2024-12-10 04:08:46.160287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 [2024-12-10 04:08:46.160303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.020 [2024-12-10 04:08:46.160316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 [2024-12-10 04:08:46.160331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.020 [2024-12-10 04:08:46.160345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 [2024-12-10 04:08:46.160361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.020 [2024-12-10 04:08:46.160374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 [2024-12-10 04:08:46.160390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.020 [2024-12-10 04:08:46.160403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 [2024-12-10 04:08:46.160418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.020 [2024-12-10 04:08:46.160432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 [2024-12-10 04:08:46.160446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.020 [2024-12-10 04:08:46.160460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 [2024-12-10 04:08:46.160475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:52.020 [2024-12-10 04:08:46.160488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 [2024-12-10 04:08:46.160504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.020 [2024-12-10 04:08:46.160517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 [2024-12-10 04:08:46.160596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:52.020 [2024-12-10 04:08:46.160762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.160795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.160809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.160822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.160834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.160847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.160865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.160878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.160890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.160902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.160914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.160926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.160938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.160950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.160962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.160974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.160987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 
04:08:46.160999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.020 [2024-12-10 04:08:46.161108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 [2024-12-10 04:08:46.161120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:12[2024-12-10 04:08:46.161141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.020 the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with [2024-12-10 04:08:46.161157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:20:52.020 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 [2024-12-10 04:08:46.161173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.020 [2024-12-10 04:08:46.161187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 [2024-12-10 04:08:46.161199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161211] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with [2024-12-10 04:08:46.161211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:12the state(6) to be set 00:20:52.020 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.020 [2024-12-10 04:08:46.161225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 [2024-12-10 04:08:46.161237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.020 [2024-12-10 04:08:46.161249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-10 04:08:46.161261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.020 [2024-12-10 04:08:46.161288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 [2024-12-10 04:08:46.161300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.020 [2024-12-10 04:08:46.161312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-10 04:08:46.161325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.020 the state(6) to be set 00:20:52.020 [2024-12-10 04:08:46.161339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.161351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.021 [2024-12-10 
04:08:46.161355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.161365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.161378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.161390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with [2024-12-10 04:08:46.161403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:12the state(6) to be set 00:20:52.021 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.161416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.161428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.161440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.161452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.161476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.161488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:1[2024-12-10 04:08:46.161500] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-10 04:08:46.161515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.161541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.161585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.161620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.161633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa090 is same with the state(6) to be set 00:20:52.021 [2024-12-10 04:08:46.161641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.161656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.161672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.161686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.161702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.161717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.161733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.161748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.161764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.161779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.161796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.161810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.161825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.161840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.161856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.161870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.161886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.161900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.161929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.161944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.161959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.161973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.161989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.162002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.162017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.162031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.162046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.021 [2024-12-10 04:08:46.162060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.021 [2024-12-10 04:08:46.162075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.162974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.162987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.163002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.163007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with [2024-12-10 
04:08:46.163015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 the state(6) to be set 00:20:52.022 [2024-12-10 04:08:46.163032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.022 [2024-12-10 04:08:46.163034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.163048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.022 [2024-12-10 04:08:46.163049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.163065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.022 [2024-12-10 04:08:46.163069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.163076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.022 [2024-12-10 04:08:46.163084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.163088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.022 [2024-12-10 04:08:46.163100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.163100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.022 [2024-12-10 04:08:46.163115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.163115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.022 [2024-12-10 04:08:46.163130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.022 [2024-12-10 04:08:46.163132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.163141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.022 [2024-12-10 04:08:46.163146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.022 [2024-12-10 04:08:46.163153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.022 [2024-12-10 04:08:46.163162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.022 [2024-12-10 04:08:46.163165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is
same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.023 [2024-12-10 04:08:46.163177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:52.023 [2024-12-10 04:08:46.163212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10
04:08:46.163429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same 
with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa410 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.163977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.023 [2024-12-10 04:08:46.164002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.023 [2024-12-10 04:08:46.164018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.023 [2024-12-10 04:08:46.164032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.023 [2024-12-10 04:08:46.164046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.023 [2024-12-10 04:08:46.164059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.023 [2024-12-10 04:08:46.164073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.023 [2024-12-10 04:08:46.164094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.023 [2024-12-10 04:08:46.164108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f0460 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.164187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.023 [2024-12-10 04:08:46.164209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:52.023 [2024-12-10 04:08:46.164229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.023 [2024-12-10 04:08:46.164244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.023 [2024-12-10 04:08:46.164258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.023 [2024-12-10 04:08:46.164271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.023 [2024-12-10 04:08:46.164285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.023 [2024-12-10 04:08:46.164298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.023 [2024-12-10 04:08:46.164310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c406d0 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.164378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.023 [2024-12-10 04:08:46.164399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.023 [2024-12-10 04:08:46.164414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.023 [2024-12-10 04:08:46.164427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.023 [2024-12-10 04:08:46.164441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.023 [2024-12-10 04:08:46.164454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.023 [2024-12-10 04:08:46.164468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.023 [2024-12-10 04:08:46.164481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.023 [2024-12-10 04:08:46.164494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edbc0 is same with the state(6) to be set 00:20:52.023 [2024-12-10 04:08:46.164538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.023 [2024-12-10 04:08:46.164674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.023 [2024-12-10 04:08:46.164694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.023 [2024-12-10 04:08:46.164708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.023 [2024-12-10 04:08:46.164722] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.023 [2024-12-10 04:08:46.164735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.164748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.024 [2024-12-10 04:08:46.164761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.164773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ed490 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.164825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.024 [2024-12-10 04:08:46.164845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.164860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.024 [2024-12-10 04:08:46.164879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.164893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.024 [2024-12-10 04:08:46.164907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.164921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.024 [2024-12-10 04:08:46.164934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.164946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f0e80 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.164984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.024 [2024-12-10 04:08:46.165003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.165017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.024 [2024-12-10 04:08:46.165031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.165044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.024 [2024-12-10 04:08:46.165057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.165071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:52.024 [2024-12-10 04:08:46.165083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.165096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f1310 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.024 [2024-12-10 04:08:46.165175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.165195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.024 [2024-12-10 04:08:46.165210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.165226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.024 [2024-12-10 04:08:46.165240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.165255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.024 [2024-12-10 04:08:46.165269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.165290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.024 [2024-12-10 04:08:46.165305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.165320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.024 [2024-12-10 04:08:46.165334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.165336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with [2024-12-10 04:08:46.165349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:1the state(6) to be set 00:20:52.024 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.024 [2024-12-10 04:08:46.165365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.165370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.024 [2024-12-10 04:08:46.165386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.024 [2024-12-10 
04:08:46.165395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.165400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.024 [2024-12-10 04:08:46.165413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.165440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.024 [2024-12-10 04:08:46.165453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.165466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.024 [2024-12-10 04:08:46.165479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.165491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with [2024-12-10 04:08:46.165505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:1the state(6) to be set 00:20:52.024 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.024 [2024-12-10 04:08:46.165527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-10 04:08:46.165526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:1[2024-12-10 04:08:46.165543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.024 the state(6) to be set 
00:20:52.024 [2024-12-10 04:08:46.165570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-10 04:08:46.165571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.024 [2024-12-10 04:08:46.165601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with [2024-12-10 04:08:46.165602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:20:52.024 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.165615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.024 [2024-12-10 04:08:46.165628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.165640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.024 [2024-12-10 04:08:46.165653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-10 04:08:46.165665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.024 [2024-12-10 04:08:46.165700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.024 [2024-12-10 04:08:46.165713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.024 [2024-12-10 04:08:46.165725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with [2024-12-10 
04:08:46.165726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:1the state(6) to be set 00:20:52.024 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.025 [2024-12-10 04:08:46.165744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with [2024-12-10 04:08:46.165745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:20:52.025 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.025 [2024-12-10 04:08:46.165758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.165762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.025 [2024-12-10 04:08:46.165771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.165777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.025 [2024-12-10 04:08:46.165783] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.165793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.025 [2024-12-10 04:08:46.165796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.165808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-10 04:08:46.165809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.025 the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.165823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.165825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.025 [2024-12-10 04:08:46.165835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.165839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.025 [2024-12-10 04:08:46.165863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.165871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.025 [2024-12-10 04:08:46.165876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.165884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.025 [2024-12-10 04:08:46.165888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is 
same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.165900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with [2024-12-10 04:08:46.165900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:1the state(6) to be set 00:20:52.025 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.025 [2024-12-10 04:08:46.165914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.165916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.025 [2024-12-10 04:08:46.165926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.165931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.025 [2024-12-10 04:08:46.165941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.165945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.025 [2024-12-10 04:08:46.165953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.165960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.025 [2024-12-10 04:08:46.165965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.165977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.165979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.025 [2024-12-10 04:08:46.165989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.165995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.025 [2024-12-10 04:08:46.166001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.166009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.025 [2024-12-10 04:08:46.166013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.166024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:1[2024-12-10 04:08:46.166025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.025 the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.166039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x17fa790 is same with [2024-12-10 04:08:46.166039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:20:52.025 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.025 [2024-12-10 04:08:46.166052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.166056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.025 [2024-12-10 04:08:46.166064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.166070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.025 [2024-12-10 04:08:46.166076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.166086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:1[2024-12-10 04:08:46.166088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.025 the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.166101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-10 04:08:46.166102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.025 the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.166119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.166120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.025 [2024-12-10 04:08:46.166131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.166134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.025 [2024-12-10 04:08:46.166143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.166150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.025 [2024-12-10 04:08:46.166154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.166164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.025 [2024-12-10 04:08:46.166166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.166180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.025 [2024-12-10 04:08:46.166188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.166193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.025 [2024-12-10 04:08:46.166200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.166208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.025 [2024-12-10 04:08:46.166213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.166225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa790 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.166228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.025 [2024-12-10 04:08:46.166243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.025 [2024-12-10 04:08:46.166257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.025 [2024-12-10 04:08:46.166271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.025 [2024-12-10 04:08:46.166285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.025 [2024-12-10 04:08:46.167385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.167419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.167442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.167465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.167486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.167513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.167535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.167580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.167607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.167628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.167650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.167679] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.025 [2024-12-10 04:08:46.167699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.167721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.167742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.167763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.167784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.167804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.167839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.167859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.167892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.167911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.167929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.167949] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.167968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.167987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168026] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168129] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168244] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168396] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the 
state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168723] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.168786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fac60 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169858] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.169996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.170007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.026 [2024-12-10 04:08:46.170023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 
04:08:46.170145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170234] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same 
with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.170462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15899a0 is same with the state(6) to be set 00:20:52.027 [2024-12-10 04:08:46.183775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.183862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.183882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.183898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.183914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.183928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.183945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.183959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.183975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.183988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.184005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.184021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.184037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.184071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.184089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.184103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.184118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.184132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.184148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.184161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.184177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.184190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.184206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.184219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.184235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.184249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.184264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.184278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.184294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.184308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.184324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.184338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.184354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.184368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.184384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.184398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.184413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.184427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.184446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.184461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.184477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.184492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.027 [2024-12-10 04:08:46.184508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.027 [2024-12-10 04:08:46.184521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.184537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.184561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.184589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.184603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.184618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.184632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.184648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.184662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.184678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.184692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.184707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.184721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.184737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.184751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187517] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.187978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.187994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.188009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.188024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.188038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.188054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.188068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.188084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.188098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.188114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.188128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.188147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.188162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.188177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.188191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.188206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.188220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.188235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.188249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.188265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-10 04:08:46.188279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.028 [2024-12-10 04:08:46.188294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:52.029 [2024-12-10 04:08:46.188755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.188980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.188996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.189009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.189025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.189038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 
04:08:46.189053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.189067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.189083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.189096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.189112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.189125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.189141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.189156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.189171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.189185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.189201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-10 04:08:46.189214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.189881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:52.029 [2024-12-10 04:08:46.189925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:52.029 [2024-12-10 04:08:46.189966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ed490 (9): Bad file descriptor 00:20:52.029 [2024-12-10 04:08:46.189998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f0e80 (9): Bad file descriptor 00:20:52.029 [2024-12-10 04:08:46.190076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f0460 (9): Bad file descriptor 00:20:52.029 [2024-12-10 04:08:46.190128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.029 [2024-12-10 04:08:46.190148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.190164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.029 [2024-12-10 04:08:46.190176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.190191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.029 [2024-12-10 04:08:46.190203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.190216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.029 [2024-12-10 04:08:46.190229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.190241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1759110 is same with the state(6) to be set 00:20:52.029 [2024-12-10 04:08:46.190293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.029 [2024-12-10 04:08:46.190313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.190328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.029 [2024-12-10 04:08:46.190341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.190354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.029 [2024-12-10 04:08:46.190367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.029 [2024-12-10 04:08:46.190380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.030 [2024-12-10 04:08:46.190393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.030 [2024-12-10 04:08:46.190405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5e5a0 is same with the state(6) to be set 00:20:52.030 [2024-12-10 04:08:46.190435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c406d0 (9): Bad file descriptor 00:20:52.030 [2024-12-10 04:08:46.190479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.030 [2024-12-10 04:08:46.190498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.030 [2024-12-10 04:08:46.190513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.030 [2024-12-10 04:08:46.190526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.030 [2024-12-10 04:08:46.190539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.030 [2024-12-10 04:08:46.190567] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.030 [2024-12-10 04:08:46.190582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.030 [2024-12-10 04:08:46.190595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.030 [2024-12-10 04:08:46.190608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ac60 is same with the state(6) to be set 00:20:52.030 [2024-12-10 04:08:46.190657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.030 [2024-12-10 04:08:46.190677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.030 [2024-12-10 04:08:46.190691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.030 [2024-12-10 04:08:46.190704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.030 [2024-12-10 04:08:46.190718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.030 [2024-12-10 04:08:46.190731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.030 [2024-12-10 04:08:46.190744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.030 [2024-12-10 04:08:46.190756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.030 [2024-12-10 04:08:46.190769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c11fa0 is same with the state(6) to be set 00:20:52.030 [2024-12-10 04:08:46.190799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edbc0 (9): Bad file descriptor 00:20:52.030 [2024-12-10 04:08:46.190826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f1310 (9): Bad file descriptor 00:20:52.030 [2024-12-10 04:08:46.193475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:52.030 [2024-12-10 04:08:46.193510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:52.030 [2024-12-10 04:08:46.194701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.030 [2024-12-10 04:08:46.194734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f0e80 with addr=10.0.0.2, port=4420 00:20:52.030 [2024-12-10 04:08:46.194753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f0e80 is same with the state(6) to be set 00:20:52.030 [2024-12-10 04:08:46.194839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.030 [2024-12-10 04:08:46.194864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ed490 with addr=10.0.0.2, port=4420 00:20:52.030 [2024-12-10 04:08:46.194880] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ed490 is same with the state(6) to be set 00:20:52.030 [2024-12-10 04:08:46.194964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.030 [2024-12-10 04:08:46.194989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f1310 with addr=10.0.0.2, port=4420 00:20:52.030 [2024-12-10 04:08:46.195004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f1310 is same with the state(6) to be set 00:20:52.030 [2024-12-10 04:08:46.195079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.030 [2024-12-10 04:08:46.195103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edbc0 with addr=10.0.0.2, port=4420 00:20:52.030 [2024-12-10 04:08:46.195126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edbc0 is same with the state(6) to be set 00:20:52.030 [2024-12-10 04:08:46.195770] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:52.030 [2024-12-10 04:08:46.196091] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:52.030 [2024-12-10 04:08:46.196128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f0e80 (9): Bad file descriptor 00:20:52.030 [2024-12-10 04:08:46.196153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ed490 (9): Bad file descriptor 00:20:52.030 [2024-12-10 04:08:46.196171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f1310 (9): Bad file descriptor 00:20:52.030 [2024-12-10 04:08:46.196187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edbc0 (9): Bad file descriptor 00:20:52.030 [2024-12-10 04:08:46.196324] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:52.030 [2024-12-10 04:08:46.196392] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:52.030 [2024-12-10 04:08:46.196461] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:52.030 [2024-12-10 04:08:46.196556] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:52.030 [2024-12-10 04:08:46.196604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:52.030 [2024-12-10 04:08:46.196624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:52.030 [2024-12-10 04:08:46.196643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:52.030 [2024-12-10 04:08:46.196659] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:52.030 [2024-12-10 04:08:46.196674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:52.030 [2024-12-10 04:08:46.196686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:52.030 [2024-12-10 04:08:46.196699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:20:52.030 [2024-12-10 04:08:46.196711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:52.030 [2024-12-10 04:08:46.196724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:52.030 [2024-12-10 04:08:46.196736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:52.030 [2024-12-10 04:08:46.196748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:52.030 [2024-12-10 04:08:46.196761] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:52.030 [2024-12-10 04:08:46.196774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:52.030 [2024-12-10 04:08:46.196786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:52.030 [2024-12-10 04:08:46.196798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:52.030 [2024-12-10 04:08:46.196809] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:20:52.030 [2024-12-10 04:08:46.199909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1759110 (9): Bad file descriptor 00:20:52.030 [2024-12-10 04:08:46.199961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5e5a0 (9): Bad file descriptor 00:20:52.030 [2024-12-10 04:08:46.200001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4ac60 (9): Bad file descriptor 00:20:52.030 [2024-12-10 04:08:46.200039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c11fa0 (9): Bad file descriptor 00:20:52.030 [2024-12-10 04:08:46.200187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.030 [2024-12-10 04:08:46.200214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.030 [2024-12-10 04:08:46.200243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.030 [2024-12-10 04:08:46.200259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.030 [2024-12-10 04:08:46.200275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.030 [2024-12-10 04:08:46.200289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.030 [2024-12-10 04:08:46.200305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.030 [2024-12-10 04:08:46.200319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.030 [2024-12-10 04:08:46.200335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.030 [2024-12-10 04:08:46.200349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.030 [2024-12-10 04:08:46.200364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.030 [2024-12-10 04:08:46.200378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.200948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.200971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.212860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.212923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.212941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.212956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.212973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.212987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.213004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.213018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.213035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.213051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.213067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.213081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.213098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.213111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.213127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.213141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.213158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.213171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.213188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.213201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.213217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.213231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.213247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.213261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.213293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.213307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.213323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.213337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.213353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.213366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.213382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.213395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.213412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.213426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.213441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.213455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.213471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.213485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.213501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:52.031 [2024-12-10 04:08:46.213514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.031 [2024-12-10 04:08:46.213530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.031 [2024-12-10 04:08:46.213551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.213570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.213585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.213600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.213615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.213630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.213644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.213660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.213678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.213695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.213709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.213725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.213739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.213754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.213768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.213784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.213798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.213814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 
04:08:46.213828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.213844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.213857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.213873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.213887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.213903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.213917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.213933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.213948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.213964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.213978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.213994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.214008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.214024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.214037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.214057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.214072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.214088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.214102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.214119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf3410 is same with the state(6) to be set 00:20:52.032 [2024-12-10 04:08:46.215563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.215588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.215613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.215628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.215645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.215659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.215675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.215689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.215705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.215719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.215735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.215749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.215764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.215778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.215794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.215808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.215824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.215838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.215853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.215867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.215888] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.215903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.215920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.215934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.215949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.215963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.215979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.215993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.216009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.216022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.216038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.216052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.216068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.216081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.216097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.216110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.216126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.216140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.216156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.216169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.216185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.032 [2024-12-10 04:08:46.216199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.032 [2024-12-10 04:08:46.216215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.216978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.216992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.217007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.217021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.217041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.217056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.217072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.217085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.217101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:52.033 [2024-12-10 04:08:46.217115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.217131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.217145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.217161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.217174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.217190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.217204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.217220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.217233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.217249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.217263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.217279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.217293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.217309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.217322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.217337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.217351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.217366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.217380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.217396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 
04:08:46.217413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.033 [2024-12-10 04:08:46.217430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.033 [2024-12-10 04:08:46.217444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.034 [2024-12-10 04:08:46.217459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.034 [2024-12-10 04:08:46.217474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.034 [2024-12-10 04:08:46.217489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.034 [2024-12-10 04:08:46.217503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.034 [2024-12-10 04:08:46.217517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a31570 is same with the state(6) to be set 00:20:52.034 [2024-12-10 04:08:46.218741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:52.034 [2024-12-10 04:08:46.218774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:52.034 [2024-12-10 04:08:46.219297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.034 [2024-12-10 04:08:46.219328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f0460 with addr=10.0.0.2, port=4420 00:20:52.034 [2024-12-10 04:08:46.219345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f0460 is same with the state(6) to be set 00:20:52.034 [2024-12-10 04:08:46.219466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.034 [2024-12-10 04:08:46.219491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c406d0 with addr=10.0.0.2, port=4420 00:20:52.034 [2024-12-10 04:08:46.219507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c406d0 is same with the state(6) to be set 00:20:52.034 [2024-12-10 04:08:46.219862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.034 [2024-12-10 04:08:46.219884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.034 [2024-12-10 04:08:46.219906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.034 [2024-12-10 04:08:46.219923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.034 [2024-12-10 04:08:46.219939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.034 [2024-12-10 04:08:46.219953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:52.034 [2024-12-10 04:08:46.219969 .. 04:08:46.221829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3..63 nsid:1 lba:16768..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (one NOTICE per cid, each followed by 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:20:52.035 [2024-12-10 04:08:46.221843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf46d0 is same with the state(6) to be set
00:20:52.035 [2024-12-10 04:08:46.223092 .. 04:08:46.236902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (one NOTICE per cid, each followed by 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:20:52.037 [2024-12-10 04:08:46.236918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf5990 is same with the state(6) to be set
00:20:52.037 [2024-12-10 04:08:46.238297 .. 04:08:46.240250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (one NOTICE per cid, each followed by 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:20:52.039 [2024-12-10 04:08:46.240268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6ca0 is same with the state(6) to be set
00:20:52.039 [2024-12-10 04:08:46.241509 .. 04:08:46.242113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0..19 nsid:1 lba:16384..18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (one NOTICE per cid, each followed by 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:20:52.039 [2024-12-10 04:08:46.242129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.039 [2024-12-10 04:08:46.242152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.039 [2024-12-10 04:08:46.242169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.039 [2024-12-10 04:08:46.242183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.039 [2024-12-10 04:08:46.242198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.039 [2024-12-10 04:08:46.242212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.039 [2024-12-10 04:08:46.242228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.039 [2024-12-10 04:08:46.242241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.039 [2024-12-10 04:08:46.242257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.039 [2024-12-10 04:08:46.242274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.039 [2024-12-10 04:08:46.242290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.039 [2024-12-10 04:08:46.242305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.039 [2024-12-10 04:08:46.242320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.039 [2024-12-10 04:08:46.242334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.039 [2024-12-10 04:08:46.242349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.039 [2024-12-10 04:08:46.242363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.039 [2024-12-10 04:08:46.242379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.039 [2024-12-10 04:08:46.242393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.039 [2024-12-10 04:08:46.242409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.039 [2024-12-10 04:08:46.242423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.039 [2024-12-10 04:08:46.242439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:52.039 [2024-12-10 04:08:46.242452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.039 [2024-12-10 04:08:46.242469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.039 [2024-12-10 04:08:46.242482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.039 [2024-12-10 04:08:46.242498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.039 [2024-12-10 04:08:46.242512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.039 [2024-12-10 04:08:46.242528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.039 [2024-12-10 04:08:46.242541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.039 [2024-12-10 04:08:46.242563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.242578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.242593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.242607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.242623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.242642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.242662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.242677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.242692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.242707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.242722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.242737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.242752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:52.040 [2024-12-10 04:08:46.242766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.242782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.242796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.242812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.242825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.242840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.242855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.242871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.242885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.242901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.242914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.242931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.242945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.242960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.242973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.242989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.243003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.243019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.243036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.243053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 
04:08:46.243067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.243082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.243096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.243112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.243130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.243147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.243161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.243176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.243190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.243206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.243220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.243236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.243249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.243265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.243278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.243294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.243308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.243323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.243337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.243353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.243367] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.243383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.243397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.243413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.243430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.243446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.040 [2024-12-10 04:08:46.243460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.040 [2024-12-10 04:08:46.243474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7f90 is same with the state(6) to be set 00:20:52.040 [2024-12-10 04:08:46.245442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:52.040 [2024-12-10 04:08:46.245480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:52.040 [2024-12-10 04:08:46.245499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:52.040 [2024-12-10 04:08:46.245516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:52.040 [2024-12-10 04:08:46.245552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:20:52.040 [2024-12-10 04:08:46.245577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:20:52.040 [2024-12-10 04:08:46.245675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f0460 (9): Bad file descriptor 00:20:52.040 [2024-12-10 04:08:46.245702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c406d0 (9): Bad file descriptor 00:20:52.040 [2024-12-10 04:08:46.245771] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:20:52.040 [2024-12-10 04:08:46.245795] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:20:52.040 [2024-12-10 04:08:46.245816] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:20:52.040 [2024-12-10 04:08:46.245836] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:20:52.040 [2024-12-10 04:08:46.245935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:20:52.040 task offset: 27520 on job bdev=Nvme2n1 fails
00:20:52.040
00:20:52.040 Latency(us)
00:20:52.040 [2024-12-10T03:08:46.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:52.040 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.040 Job: Nvme1n1 ended in about 0.93 seconds with error
00:20:52.040 Verification LBA range: start 0x0 length 0x400
00:20:52.040 Nvme1n1 : 0.93 206.34 12.90 68.78 0.00 230063.41 19320.98 236123.78
00:20:52.040 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.040 Job: Nvme2n1 ended in about 0.92 seconds with error
00:20:52.040 Verification LBA range: start 0x0 length 0x400
00:20:52.040 Nvme2n1 : 0.92 207.64 12.98 69.21 0.00 224026.17 26796.94 245444.46
00:20:52.040 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.040 Job: Nvme3n1 ended in about 0.93 seconds with error
00:20:52.040 Verification LBA range: start 0x0 length 0x400
00:20:52.040 Nvme3n1 : 0.93 207.40 12.96 69.13 0.00 219731.63 20194.80 256318.58
00:20:52.041 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.041 Job: Nvme4n1 ended in about 0.93 seconds with error
00:20:52.041 Verification LBA range: start 0x0 length 0x400
00:20:52.041 Nvme4n1 : 0.93 206.09 12.88 68.70 0.00 216651.28 17476.27 260978.92
00:20:52.041 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.041 Job: Nvme5n1 ended in about 0.95 seconds with error
00:20:52.041 Verification LBA range: start 0x0 length 0x400
00:20:52.041 Nvme5n1 : 0.95 134.20 8.39 67.10 0.00 290244.58 33010.73 242337.56
00:20:52.041 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.041 Job: Nvme6n1 ended in about 0.96 seconds with error
00:20:52.041 Verification LBA range: start 0x0 length 0x400
00:20:52.041 Nvme6n1 : 0.96 133.13 8.32 66.57 0.00 286755.40 32816.55 276513.37
00:20:52.041 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.041 Job: Nvme7n1 ended in about 0.98 seconds with error
00:20:52.041 Verification LBA range: start 0x0 length 0x400
00:20:52.041 Nvme7n1 : 0.98 131.08 8.19 65.54 0.00 285856.81 20777.34 253211.69
00:20:52.041 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.041 Job: Nvme8n1 ended in about 0.98 seconds with error
00:20:52.041 Verification LBA range: start 0x0 length 0x400
00:20:52.041 Nvme8n1 : 0.98 130.64 8.16 65.32 0.00 281006.08 17961.72 301368.51
00:20:52.041 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.041 Job: Nvme9n1 ended in about 0.98 seconds with error
00:20:52.041 Verification LBA range: start 0x0 length 0x400
00:20:52.041 Nvme9n1 : 0.98 130.21 8.14 65.11 0.00 276423.62 19320.98 285834.05
00:20:52.041 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.041 Job: Nvme10n1 ended in about 0.96 seconds with error
00:20:52.041 Verification LBA range: start 0x0 length 0x400
00:20:52.041 Nvme10n1 : 0.96 133.73 8.36 66.87 0.00 261853.23 34175.81 281173.71
00:20:52.041 [2024-12-10T03:08:46.430Z] ===================================================================================================================
00:20:52.041 [2024-12-10T03:08:46.430Z] Total : 1620.47 101.28 672.32 0.00 253185.56 17476.27 301368.51
00:20:52.041
[2024-12-10 04:08:46.274137] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:52.041 [2024-12-10 04:08:46.274236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:20:52.041 [2024-12-10 04:08:46.274598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.041 [2024-12-10 04:08:46.274636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edbc0 with addr=10.0.0.2, port=4420 00:20:52.041 [2024-12-10 04:08:46.274658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edbc0 is same with the state(6) to be set 00:20:52.041 [2024-12-10 04:08:46.274759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.041 [2024-12-10 04:08:46.274785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f1310 with addr=10.0.0.2, port=4420 00:20:52.041 [2024-12-10 04:08:46.274801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f1310 is same with the state(6) to be set 00:20:52.041 [2024-12-10 04:08:46.274883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.041 [2024-12-10 04:08:46.274907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ed490 with addr=10.0.0.2, port=4420 00:20:52.041 [2024-12-10 04:08:46.274923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ed490 is same with the state(6) to be set 00:20:52.041 [2024-12-10 04:08:46.275011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.041 [2024-12-10 04:08:46.275036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f0e80 with addr=10.0.0.2, port=4420 00:20:52.041 [2024-12-10 04:08:46.275052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f0e80 is same with the state(6) to be set 00:20:52.041 [2024-12-10 04:08:46.275164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.041 [2024-12-10 04:08:46.275189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1759110 with addr=10.0.0.2, port=4420 00:20:52.041 [2024-12-10 04:08:46.275222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1759110 is same with the state(6) to be set 00:20:52.041 [2024-12-10 04:08:46.275301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.041 [2024-12-10 04:08:46.275325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c11fa0 with addr=10.0.0.2, port=4420 00:20:52.041 [2024-12-10 04:08:46.275341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c11fa0 is same with the state(6) to be set 00:20:52.041 [2024-12-10 04:08:46.275357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:52.041 [2024-12-10 04:08:46.275370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:52.041 [2024-12-10 04:08:46.275388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
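The Latency(us) table above can be loosely sanity-checked with Little's law: each bdevperf job runs at a fixed queue depth of 64, so average latency should be roughly depth divided by the total completion rate (IOPS plus Fail/s). An illustrative one-liner for the Nvme1n1 row, using only numbers taken from the table and not part of the test scripts themselves:

awk 'BEGIN { qd = 64; iops = 206.34; fails = 68.78; printf "%.0f us\n", qd / (iops + fails) * 1e6 }'
# prints about 232626 us, close to the 230063.41 us average reported for Nvme1n1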
00:20:52.041 [2024-12-10 04:08:46.275405] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:20:52.041 [2024-12-10 04:08:46.275422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:52.041 [2024-12-10 04:08:46.275435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:52.041 [2024-12-10 04:08:46.275447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:52.041 [2024-12-10 04:08:46.275459] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:20:52.041 [2024-12-10 04:08:46.276802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.041 [2024-12-10 04:08:46.276832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5e5a0 with addr=10.0.0.2, port=4420 00:20:52.041 [2024-12-10 04:08:46.276849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5e5a0 is same with the state(6) to be set 00:20:52.041 [2024-12-10 04:08:46.276920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.041 [2024-12-10 04:08:46.276944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ac60 with addr=10.0.0.2, port=4420 00:20:52.041 [2024-12-10 04:08:46.276960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ac60 is same with the state(6) to be set 00:20:52.041 [2024-12-10 04:08:46.276986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edbc0 (9): Bad file descriptor 00:20:52.041 [2024-12-10 04:08:46.277011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f1310 (9): Bad file descriptor 00:20:52.041 [2024-12-10 04:08:46.277028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ed490 (9): Bad file descriptor 00:20:52.041 [2024-12-10 04:08:46.277045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f0e80 (9): Bad file descriptor 00:20:52.041 [2024-12-10 04:08:46.277061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1759110 (9): Bad file descriptor 00:20:52.041 [2024-12-10 04:08:46.277078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c11fa0 (9): Bad file descriptor 00:20:52.041 [2024-12-10 04:08:46.277151] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:20:52.041 [2024-12-10 04:08:46.277175] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:20:52.041 [2024-12-10 04:08:46.277194] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:20:52.041 [2024-12-10 04:08:46.277216] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 
00:20:52.041 [2024-12-10 04:08:46.277241] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:20:52.041 [2024-12-10 04:08:46.277261] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:20:52.041 [2024-12-10 04:08:46.277368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5e5a0 (9): Bad file descriptor 00:20:52.041 [2024-12-10 04:08:46.277394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4ac60 (9): Bad file descriptor 00:20:52.041 [2024-12-10 04:08:46.277410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:52.041 [2024-12-10 04:08:46.277423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:52.041 [2024-12-10 04:08:46.277436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:52.041 [2024-12-10 04:08:46.277449] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:20:52.041 [2024-12-10 04:08:46.277462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:52.041 [2024-12-10 04:08:46.277475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:52.041 [2024-12-10 04:08:46.277487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:52.041 [2024-12-10 04:08:46.277498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:52.041 [2024-12-10 04:08:46.277511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:52.041 [2024-12-10 04:08:46.277523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:52.041 [2024-12-10 04:08:46.277535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:52.041 [2024-12-10 04:08:46.277554] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:52.041 [2024-12-10 04:08:46.277569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:52.041 [2024-12-10 04:08:46.277581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:52.041 [2024-12-10 04:08:46.277594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:52.041 [2024-12-10 04:08:46.277605] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:20:52.041 [2024-12-10 04:08:46.277618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:52.041 [2024-12-10 04:08:46.277629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:52.041 [2024-12-10 04:08:46.277641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:20:52.041 [2024-12-10 04:08:46.277653] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:20:52.041 [2024-12-10 04:08:46.277666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:52.041 [2024-12-10 04:08:46.277678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:52.042 [2024-12-10 04:08:46.277693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:52.042 [2024-12-10 04:08:46.277705] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:20:52.042 [2024-12-10 04:08:46.277785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:52.042 [2024-12-10 04:08:46.277814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:52.042 [2024-12-10 04:08:46.277847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:52.042 [2024-12-10 04:08:46.277863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:52.042 [2024-12-10 04:08:46.277876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:52.042 [2024-12-10 04:08:46.277889] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:20:52.042 [2024-12-10 04:08:46.277902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:20:52.042 [2024-12-10 04:08:46.277914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:20:52.042 [2024-12-10 04:08:46.277927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:20:52.042 [2024-12-10 04:08:46.277940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:20:52.042 [2024-12-10 04:08:46.278048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.042 [2024-12-10 04:08:46.278075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c406d0 with addr=10.0.0.2, port=4420 00:20:52.042 [2024-12-10 04:08:46.278091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c406d0 is same with the state(6) to be set 00:20:52.042 [2024-12-10 04:08:46.278170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.042 [2024-12-10 04:08:46.278195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f0460 with addr=10.0.0.2, port=4420 00:20:52.042 [2024-12-10 04:08:46.278210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f0460 is same with the state(6) to be set 00:20:52.042 [2024-12-10 04:08:46.278257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c406d0 (9): Bad file descriptor 00:20:52.042 [2024-12-10 04:08:46.278281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f0460 (9): Bad file descriptor 00:20:52.042 [2024-12-10 04:08:46.278319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:52.042 [2024-12-10 04:08:46.278337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:52.042 [2024-12-10 04:08:46.278350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:52.042 [2024-12-10 04:08:46.278362] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:20:52.042 [2024-12-10 04:08:46.278376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:52.042 [2024-12-10 04:08:46.278389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:52.042 [2024-12-10 04:08:46.278401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:52.042 [2024-12-10 04:08:46.278412] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
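The repeated 'posix_sock_create: *ERROR*: connect() failed, errno = 111' messages above are ECONNREFUSED, which is expected at this point in the shutdown test: the target application is going down while bdev_nvme is still trying to reset its controllers, so every reconnect to 10.0.0.2:4420 is refused and the controllers are left in the failed state. A quick illustrative way to confirm the errno mapping on a Linux build host (header path may vary by distro; not part of the captured output):

grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
# expected: #define ECONNREFUSED 111 /* Connection refused */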
00:20:52.608 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2440405 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2440405 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2440405 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:53.547 rmmod nvme_tcp 00:20:53.547 
rmmod nvme_fabrics 00:20:53.547 rmmod nvme_keyring 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2440230 ']' 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2440230 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2440230 ']' 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2440230 00:20:53.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2440230) - No such process 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2440230 is not found' 00:20:53.547 Process with pid 2440230 is not found 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.547 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.452 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:55.712 00:20:55.712 real 0m7.523s 00:20:55.712 user 0m18.584s 00:20:55.712 sys 0m1.505s 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.712 ************************************ 00:20:55.712 END TEST nvmf_shutdown_tc3 00:20:55.712 ************************************ 00:20:55.712 04:08:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:55.712 ************************************ 00:20:55.712 START TEST nvmf_shutdown_tc4 00:20:55.712 ************************************ 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:55.712 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:55.712 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.712 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.713 04:08:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:55.713 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:55.713 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:55.713 04:08:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:55.713 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:55.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:20:55.713 00:20:55.713 --- 10.0.0.2 ping statistics --- 00:20:55.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.713 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:55.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:20:55.713 00:20:55.713 --- 10.0.0.1 ping statistics --- 00:20:55.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.713 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2441318 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2441318 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2441318 ']' 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
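The trace above is the stock nvmftestinit wiring for a two-port e810 card: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and gets the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, one iptables rule opens TCP port 4420 on the initiator side, and a ping in each direction confirms the link. A minimal standalone sketch of that wiring, reusing the interface names and addresses from this trace (they are host-specific, not universal defaults), could look like this:

#!/usr/bin/env bash
# Sketch of the namespace setup performed by nvmftestinit above.
set -euo pipefail

TARGET_IF=cvl_0_0        # port handed to the target namespace
INITIATOR_IF=cvl_0_1     # port left in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to reach the initiator-side port.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Reachability check in both directions, mirroring the pings above.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

This is also why the nvmf_tgt launch just above carries the ip netns exec cvl_0_0_ns_spdk prefix: the target process has to live in that namespace so it can listen on 10.0.0.2.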
00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.713 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:55.971 [2024-12-10 04:08:50.124382] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:20:55.971 [2024-12-10 04:08:50.124469] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.971 [2024-12-10 04:08:50.199331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:55.971 [2024-12-10 04:08:50.258642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.971 [2024-12-10 04:08:50.258696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.971 [2024-12-10 04:08:50.258725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.971 [2024-12-10 04:08:50.258737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.971 [2024-12-10 04:08:50.258746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.971 [2024-12-10 04:08:50.260213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.971 [2024-12-10 04:08:50.260274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:55.971 [2024-12-10 04:08:50.260342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:55.971 [2024-12-10 04:08:50.260345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:56.230 [2024-12-10 04:08:50.409488] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:56.230 04:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.230 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:56.230 Malloc1 
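With the TCP transport already created just above (rpc_cmd nvmf_create_transport -t tcp -o -u 8192), the create_subsystems phase assembles one RPC block per subsystem 1..10 into rpcs.txt via the repeated cat calls and then replays it against the target; the Malloc1 notice here, and the Malloc2..Malloc10 and 10.0.0.2:4420 listener notices that follow, are the visible result. The trace does not show the block itself, so the loop below is only a sketch of an equivalent setup using standard rpc.py calls; the rpc.py path, bdev sizes and serial numbers are illustrative assumptions, while the cnode NQNs match the ones that appear later in this log.

#!/usr/bin/env bash
# Hypothetical equivalent of the per-subsystem config assembled above:
# one Malloc bdev per subsystem, exported as namespace 1 over NVMe/TCP.
set -euo pipefail

RPC=./scripts/rpc.py          # path inside an SPDK checkout (assumption)
TARGET_IP=10.0.0.2
PORT=4420

# Transport as recorded at shutdown.sh@21 in the trace above.
"$RPC" nvmf_create_transport -t tcp -o -u 8192

for i in $(seq 1 10); do
    "$RPC" bdev_malloc_create -b "Malloc$i" 128 512   # 128 MiB, 512 B blocks (illustrative)
    "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK00000000000$i" -d "SPDK_Controller$i"
    "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a "$TARGET_IP" -s "$PORT"
done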
00:20:56.230 [2024-12-10 04:08:50.510416] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.230 Malloc2 00:20:56.230 Malloc3 00:20:56.489 Malloc4 00:20:56.489 Malloc5 00:20:56.489 Malloc6 00:20:56.489 Malloc7 00:20:56.489 Malloc8 00:20:56.749 Malloc9 00:20:56.749 Malloc10 00:20:56.749 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.749 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:56.749 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.749 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:56.749 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2441498 00:20:56.749 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:20:56.749 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:20:56.749 [2024-12-10 04:08:51.049028] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:02.028 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:02.028 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2441318 00:21:02.028 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2441318 ']' 00:21:02.028 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2441318 00:21:02.028 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:02.028 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.028 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2441318 00:21:02.028 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:02.028 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:02.028 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2441318' 00:21:02.028 killing process with pid 2441318 00:21:02.028 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2441318 00:21:02.028 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2441318 00:21:02.028 Write completed with error (sct=0, 
sc=8) 00:21:02.028 starting I/O failed: -6 00:21:02.028 Write completed with error (sct=0, sc=8) 00:21:02.028 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:21:02.028 [2024-12-10 04:08:56.046433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.028 [2024-12-10 04:08:56.047461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.029 [2024-12-10 04:08:56.048644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:02.029 [2024-12-10 04:08:56.050612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:02.029 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:21:02.029 [2024-12-10 04:08:56.051285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c930 is same with the state(6) to be set
[... the tqpair=0x97c930 message repeats through 04:08:56.051592, interleaved with further write failures ...]
00:21:02.030 [2024-12-10 04:08:56.051753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.030 [2024-12-10 04:08:56.052296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97d1a0 is same with the state(6) to be set
[... the tqpair=0x97d1a0 message repeats through 04:08:56.052444, interleaved with further write failures ...]
00:21:02.030 [2024-12-10 04:08:56.052879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.030 [2024-12-10 04:08:56.053027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c460 is same with the state(6) to be set
[... the tqpair=0x97c460 message repeats through 04:08:56.053226, interleaved with further write failures ...]
00:21:02.030 [2024-12-10 04:08:56.053994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
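The flood of write failures and per-qpair CQ transport errors here, and continuing below, is the point of nvmf_shutdown_tc4: spdk_nvme_perf (pid 2441498) is left running 128-deep random 44 KiB writes against 10.0.0.2:4420 while the nvmf_tgt process (pid 2441318) is killed out from under it, so every in-flight command completes with an error and each qpair reports transport error -6. A rough sketch of that kill-under-load sequence, with paths assumed relative to an SPDK build tree and the perf options copied from the command line recorded above, is:

#!/usr/bin/env bash
# Sketch of the shutdown-under-load step that produces the errors in this log.
set -euo pipefail

PERF=./build/bin/spdk_nvme_perf            # assumption: SPDK build tree layout
TGT_PID=$(pgrep -f nvmf_tgt | head -n1)    # target started earlier (assumption)

# Queue depth 128, 45056-byte (44 KiB) random writes for up to 20 s.
"$PERF" -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
    -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
PERF_PID=$!

sleep 5              # let I/O ramp up, as the test does
kill "$TGT_PID"      # take the target away mid-run

# perf drains its queues against a dead connection; the repeated
# "Write completed with error" lines and "CQ transport error -6"
# messages in this log are what that looks like on the initiator side.
wait "$PERF_PID" || true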
00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 [2024-12-10 04:08:56.055892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:02.031 NVMe io qpair process completion error 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed 
with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 [2024-12-10 04:08:56.057103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O 
failed: -6 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 Write completed with error (sct=0, sc=8) 00:21:02.031 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 [2024-12-10 04:08:56.058100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 Write completed with 
error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 [2024-12-10 04:08:56.059281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write 
completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.032 starting I/O failed: -6 00:21:02.032 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write 
completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 [2024-12-10 04:08:56.061080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:02.033 NVMe io qpair process completion error 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 [2024-12-10 04:08:56.062222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting 
I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 [2024-12-10 04:08:56.063286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write 
completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.033 starting I/O failed: -6 00:21:02.033 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 [2024-12-10 04:08:56.064477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 
00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 [2024-12-10 04:08:56.066357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:02.034 NVMe io qpair process completion error 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 
Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 [2024-12-10 04:08:56.067530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.034 Write completed with error (sct=0, sc=8) 00:21:02.034 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 Write completed with error (sct=0, 
sc=8) 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 [2024-12-10 04:08:56.068599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting 
I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 [2024-12-10 04:08:56.069852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write 
completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 Write completed with error (sct=0, sc=8) 00:21:02.035 starting I/O failed: -6 00:21:02.035 [2024-12-10 04:08:56.071518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:02.036 NVMe io qpair process completion error 00:21:02.036 Write completed with error 
(sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 starting I/O failed: -6 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 starting I/O failed: -6 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 starting I/O failed: -6 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 starting I/O failed: -6 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 starting I/O failed: -6 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 starting I/O failed: -6 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 starting I/O failed: -6 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 starting I/O failed: -6 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 starting I/O failed: -6 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 starting I/O failed: -6 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 [2024-12-10 04:08:56.072910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:02.036 starting I/O failed: -6 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 starting I/O failed: -6 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 starting I/O failed: -6 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 starting I/O failed: -6 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 starting I/O failed: -6 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 Write completed with error (sct=0, sc=8) 00:21:02.036 starting I/O failed: -6 00:21:02.036 Write completed with error 
(sct=0, sc=8)
00:21:02.036 Write completed with error (sct=0, sc=8)
00:21:02.036 starting I/O failed: -6
00:21:02.036 [2024-12-10 04:08:56.074038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.036 Write completed with error (sct=0, sc=8)
00:21:02.036 starting I/O failed: -6
00:21:02.036 [2024-12-10 04:08:56.075171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.037 Write completed with error (sct=0, sc=8)
00:21:02.037 starting I/O failed: -6
00:21:02.037 [2024-12-10 04:08:56.078015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:02.037 NVMe io qpair process completion error
00:21:02.037 Write completed with error (sct=0, sc=8)
00:21:02.037 starting I/O failed: -6
00:21:02.037 [2024-12-10 04:08:56.079360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.038 Write completed with error (sct=0, sc=8)
00:21:02.038 starting I/O failed: -6
00:21:02.038 [2024-12-10 04:08:56.080274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.038 Write completed with error (sct=0, sc=8)
00:21:02.038 starting I/O failed: -6
00:21:02.038 [2024-12-10 04:08:56.081495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:02.039 Write completed with error (sct=0, sc=8)
00:21:02.039 starting I/O failed: -6
00:21:02.039 [2024-12-10 04:08:56.084482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:02.039 NVMe io qpair process completion error
00:21:02.039 Write completed with error (sct=0, sc=8)
00:21:02.039 starting I/O failed: -6
00:21:02.039 [2024-12-10 04:08:56.085768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.039 Write completed with error (sct=0, sc=8)
00:21:02.039 starting I/O failed: -6
00:21:02.039 [2024-12-10 04:08:56.086747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.039 Write completed with error (sct=0, sc=8)
00:21:02.039 starting I/O failed: -6
00:21:02.039 [2024-12-10 04:08:56.087935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:02.040 Write completed with error (sct=0, sc=8)
00:21:02.040 starting I/O failed: -6
00:21:02.040 [2024-12-10 04:08:56.090245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:02.040 NVMe io qpair process completion error
00:21:02.040 Write completed with error (sct=0, sc=8)
00:21:02.040 starting I/O failed: -6
00:21:02.040 [2024-12-10 04:08:56.091582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.040 Write completed with error (sct=0, sc=8)
00:21:02.040 starting I/O failed: -6
00:21:02.040 [2024-12-10 04:08:56.092602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.041 Write completed with error (sct=0, sc=8)
00:21:02.041 starting I/O failed: -6
00:21:02.041 [2024-12-10 04:08:56.093750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:02.041 Write completed with error (sct=0, sc=8)
00:21:02.041 starting I/O failed: -6
00:21:02.041 [2024-12-10 04:08:56.095461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:02.041 NVMe io qpair process completion error
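The repeated pairs of messages in this part of the log come from the initiator's I/O path while the target side goes away: each queued write is completed with NVMe status sct=0, sc=8 (generic command status; 0x08 is the "command aborted due to SQ deletion" code, SPDK_NVME_SC_ABORTED_SQ_DELETION), and once the connection is gone spdk_nvme_qpair_process_completions() returns -6 (-ENXIO, "No such device or address"), which the driver reports as the CQ transport error lines above. As a rough sketch of where those two kinds of output would surface for a caller of the public API (assuming an already-connected namespace/qpair pair, and using made-up helper names write_done() and submit_and_poll() that are not part of SPDK or of this test), something like the following could be used:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#include "spdk/nvme.h"

/* Illustrative only; this is not the test code that produced this log.
 * write_done() shows where the "(sct=..., sc=...)" fields printed above
 * come from: the NVMe completion status of each failed write. */
static void
write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
        bool *failed = cb_arg;

        if (spdk_nvme_cpl_is_error(cpl)) {
                /* sct=0, sc=8 corresponds to the generic status code
                 * SPDK_NVME_SC_ABORTED_SQ_DELETION. */
                fprintf(stderr, "Write completed with error (sct=%d, sc=%d)\n",
                        cpl->status.sct, cpl->status.sc);
                *failed = true;
        }
}

/* Hypothetical helper: submit one write and poll its qpair until either the
 * write completes or the qpair itself fails. Assumes this is the only I/O
 * outstanding on the qpair. */
static int
submit_and_poll(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                void *buf, uint64_t lba, uint32_t lba_count)
{
        bool failed = false;
        int rc;

        rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
                                    write_done, &failed, 0);
        if (rc != 0) {
                fprintf(stderr, "starting I/O failed: %d\n", rc);
                return rc;
        }

        for (;;) {
                rc = spdk_nvme_qpair_process_completions(qpair, 0);
                if (rc < 0) {
                        /* e.g. -6 == -ENXIO after the target subsystem is
                         * removed; the driver also logs the CQ transport
                         * error seen above. */
                        fprintf(stderr, "qpair failed: %d (%s)\n",
                                rc, strerror(-rc));
                        return rc;
                }
                if (rc > 0) {
                        break;  /* write_done() has run */
                }
        }

        return failed ? -EIO : 0;
}

In a tear-down scenario like this test, every write still outstanding on a qpair would be failed through such a callback before the qpair itself reports the transport error, which is consistent with the long runs of identical messages surrounding each nvme_qpair.c error line.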
00:21:02.042 Write completed with error (sct=0, sc=8)
00:21:02.042 starting I/O failed: -6
00:21:02.042 [2024-12-10 04:08:56.096629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.042 Write completed with error (sct=0, sc=8)
00:21:02.042 starting I/O failed: -6
00:21:02.042 [2024-12-10 04:08:56.097649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.042 Write completed with error (sct=0, sc=8)
00:21:02.042 starting I/O failed: -6
00:21:02.042 [2024-12-10 04:08:56.099034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:02.043 Write completed with error (sct=0, sc=8)
00:21:02.043 starting I/O failed: -6
00:21:02.043 Write
completed with error (sct=0, sc=8) 00:21:02.043 starting I/O failed: -6 00:21:02.043 Write completed with error (sct=0, sc=8) 00:21:02.043 starting I/O failed: -6 00:21:02.043 Write completed with error (sct=0, sc=8) 00:21:02.043 starting I/O failed: -6 00:21:02.043 Write completed with error (sct=0, sc=8) 00:21:02.043 starting I/O failed: -6 00:21:02.043 [2024-12-10 04:08:56.102055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:02.043 NVMe io qpair process completion error 00:21:02.043 Initializing NVMe Controllers 00:21:02.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:21:02.043 Controller IO queue size 128, less than required. 00:21:02.043 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:21:02.043 Controller IO queue size 128, less than required. 00:21:02.043 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:02.043 Controller IO queue size 128, less than required. 00:21:02.043 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:21:02.043 Controller IO queue size 128, less than required. 00:21:02.043 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:21:02.043 Controller IO queue size 128, less than required. 00:21:02.043 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:21:02.043 Controller IO queue size 128, less than required. 00:21:02.043 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:21:02.043 Controller IO queue size 128, less than required. 00:21:02.043 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:21:02.043 Controller IO queue size 128, less than required. 00:21:02.043 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:21:02.043 Controller IO queue size 128, less than required. 00:21:02.043 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:21:02.043 Controller IO queue size 128, less than required. 00:21:02.043 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
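Two things in the burst above are worth decoding. The flood of "Write completed with error (sct=0, sc=8)" lines is the initiator seeing its in-flight writes aborted as the target side tears its queues down: status code type 0 with status code 8 is the generic NVMe status "Command Aborted due to SQ Deletion", and the accompanying -6 is -ENXIO, the same "No such device or address" printed by the per-qpair CQ transport errors for cnode5. Separately, the "Controller IO queue size 128, less than required" notices mean spdk_nvme_perf was started with a queue depth larger than the 128 entries the target advertises, so the excess requests simply wait inside the NVMe driver. A minimal bash sketch of how one might count the aborts from a captured console log and avoid the queue-size notice; the log file name is an assumption, and the rpc.py/spdk_nvme_perf option names should be checked against each tool's own help output:

    # assumption: the console output above was captured to shutdown_tc4.log
    grep -c 'Write completed with error (sct=0, sc=8)' shutdown_tc4.log      # total aborted writes
    grep 'CQ transport error -6' shutdown_tc4.log \
        | awk -F'qpair id ' '{print $2}' | sort -n | uniq -c                 # aborts reported per qpair

    # target side: raise the TCP transport's per-queue depth before creating subsystems
    # (-q/--max-queue-depth as listed by rpc.py nvmf_create_transport -h; verify locally)
    scripts/rpc.py nvmf_create_transport -t tcp -q 256
    # initiator side: or keep the perf queue depth at or below the advertised IO queue size (128 here)
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode5'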
00:21:02.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:02.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:02.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:02.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:02.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:02.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:02.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:02.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:02.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:02.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:02.043 Initialization complete. Launching workers.
00:21:02.043 ========================================================
00:21:02.043 Latency(us)
00:21:02.043 Device Information : IOPS MiB/s Average min max
00:21:02.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1886.31 81.05 67876.22 807.46 132809.77
00:21:02.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1838.07 78.98 69678.30 1053.15 135216.20
00:21:02.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1820.59 78.23 69547.85 1037.41 117599.38
00:21:02.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1821.43 78.26 69539.26 804.53 141881.86
00:21:02.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1778.04 76.40 71258.70 873.69 115619.94
00:21:02.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1787.31 76.80 70910.44 994.30 118220.12
00:21:02.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1798.47 77.28 70493.50 1083.77 120455.82
00:21:02.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1888.63 81.15 67151.29 902.80 122514.97
00:21:02.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1800.37 77.36 70481.90 747.25 126142.63
00:21:02.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1830.28 78.64 69368.34 1033.43 129826.05
00:21:02.043 ========================================================
00:21:02.043 Total : 18249.50 784.16 69606.80 747.25 141881.86
00:21:02.043 ========================================================
00:21:02.043
00:21:02.043 [2024-12-10 04:08:56.108151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166c5f0 is same with the state(6) to be set
00:21:02.043 [2024-12-10 04:08:56.108255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166bd10 is same with the state(6) to be set
00:21:02.043 [2024-12-10 04:08:56.108315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166d720 is same with the state(6) to be set
00:21:02.044 [2024-12-10 04:08:56.108375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166d900 is same with the state(6) to be set
00:21:02.044 [2024-12-10 04:08:56.108433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166c2c0 is same with the state(6) to be set
00:21:02.044 [2024-12-10 04:08:56.108490] nvme_tcp.c:
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166b6b0 is same with the state(6) to be set 00:21:02.044 [2024-12-10 04:08:56.108601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166b9e0 is same with the state(6) to be set 00:21:02.044 [2024-12-10 04:08:56.108660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166dae0 is same with the state(6) to be set 00:21:02.044 [2024-12-10 04:08:56.108716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166c920 is same with the state(6) to be set 00:21:02.044 [2024-12-10 04:08:56.108772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166cc50 is same with the state(6) to be set 00:21:02.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:02.303 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2441498 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2441498 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2441498 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:03.238 
04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:03.238 rmmod nvme_tcp 00:21:03.238 rmmod nvme_fabrics 00:21:03.238 rmmod nvme_keyring 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2441318 ']' 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2441318 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2441318 ']' 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2441318 00:21:03.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2441318) - No such process 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2441318 is not found' 00:21:03.238 Process with pid 2441318 is not found 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:03.238 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:03.239 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:03.239 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:03.239 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:03.239 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:03.239 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.239 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.239 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.774 04:08:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:05.774 00:21:05.774 real 0m9.761s 00:21:05.774 user 0m23.275s 00:21:05.774 sys 0m5.881s 00:21:05.774 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.774 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:05.774 ************************************ 00:21:05.774 END TEST nvmf_shutdown_tc4 00:21:05.774 ************************************ 00:21:05.774 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:05.774 00:21:05.774 real 0m37.283s 00:21:05.774 user 1m40.305s 00:21:05.774 sys 0m12.384s 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:05.775 ************************************ 00:21:05.775 END TEST nvmf_shutdown 00:21:05.775 ************************************ 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:05.775 ************************************ 00:21:05.775 START TEST nvmf_nsid 00:21:05.775 ************************************ 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:05.775 * Looking for test storage... 
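The nvmf_nsid suite that starts here first probes the installed lcov version (the lcov --version | awk '{print $NF}' and lt 1.15 2 calls traced in the next lines) to decide whether the older --rc lcov_branch_coverage/--rc lcov_function_coverage options are needed. A minimal sketch of an equivalent check, using GNU sort -V instead of common.sh's field-by-field cmp_versions:

    # not a verbatim copy of scripts/common.sh; same effect via sort -V
    lcov_ver=$(lcov --version | awk '{print $NF}')
    if [ "$(printf '%s\n' "$lcov_ver" 2 | sort -V | head -n1)" != 2 ]; then
        echo "lcov $lcov_ver is older than 2; export the legacy --rc lcov_* options shown below"
    fi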
00:21:05.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:05.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.775 --rc genhtml_branch_coverage=1 00:21:05.775 --rc genhtml_function_coverage=1 00:21:05.775 --rc genhtml_legend=1 00:21:05.775 --rc geninfo_all_blocks=1 00:21:05.775 --rc geninfo_unexecuted_blocks=1 00:21:05.775 00:21:05.775 ' 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:05.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.775 --rc genhtml_branch_coverage=1 00:21:05.775 --rc genhtml_function_coverage=1 00:21:05.775 --rc genhtml_legend=1 00:21:05.775 --rc geninfo_all_blocks=1 00:21:05.775 --rc geninfo_unexecuted_blocks=1 00:21:05.775 00:21:05.775 ' 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:05.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.775 --rc genhtml_branch_coverage=1 00:21:05.775 --rc genhtml_function_coverage=1 00:21:05.775 --rc genhtml_legend=1 00:21:05.775 --rc geninfo_all_blocks=1 00:21:05.775 --rc geninfo_unexecuted_blocks=1 00:21:05.775 00:21:05.775 ' 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:05.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.775 --rc genhtml_branch_coverage=1 00:21:05.775 --rc genhtml_function_coverage=1 00:21:05.775 --rc genhtml_legend=1 00:21:05.775 --rc geninfo_all_blocks=1 00:21:05.775 --rc geninfo_unexecuted_blocks=1 00:21:05.775 00:21:05.775 ' 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.775 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:05.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:05.776 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:07.679 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:07.679 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
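The loop traced here is nvmf/common.sh matching the host's two E810 ports (vendor:device 0x8086:0x159b at 0000:0a:00.0 and 0000:0a:00.1) and, in the lines that follow, resolving each PCI function to its kernel net device through sysfs. A minimal sketch of the same lookup done by hand, with the IDs taken from this run:

    # list the matched E810 functions (requires pciutils)
    lspci -D -d 8086:159b
    # the same sysfs path the script reads to find the net device behind each function
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
    done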
00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.679 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:07.680 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:07.680 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.680 04:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:07.680 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:07.680 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:07.680 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:07.680 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:07.680 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:07.680 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:07.680 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:07.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:07.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:21:07.680 00:21:07.680 --- 10.0.0.2 ping statistics --- 00:21:07.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.680 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:21:07.680 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:07.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:07.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:21:07.680 00:21:07.680 --- 10.0.0.1 ping statistics --- 00:21:07.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.680 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:21:07.680 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.680 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:07.680 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:07.680 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.680 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:07.680 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:07.680 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.680 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:07.680 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:07.938 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:07.938 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:07.938 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:07.938 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:07.938 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2444348 00:21:07.938 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:07.938 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2444348 00:21:07.938 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2444348 ']' 00:21:07.938 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.938 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.938 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.938 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.938 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:07.938 [2024-12-10 04:09:02.130755] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
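The nvmf_tcp_init trace above builds the split-namespace topology the rest of the run depends on: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24 (target side), cvl_0_1 stays in the root namespace with 10.0.0.1/24 (initiator side), an iptables rule opens TCP port 4420 on cvl_0_1, and both directions are verified with the pings whose output appears just above; nvmf_tgt is then launched inside the namespace (the -i 0 -e 0xFFFF -m 1 invocation traced a little further down). A minimal sketch collecting those commands, with device and namespace names from this run and paths shortened; run as root from the spdk checkout:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1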
00:21:07.938 [2024-12-10 04:09:02.130822] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.938 [2024-12-10 04:09:02.200643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.938 [2024-12-10 04:09:02.256362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.938 [2024-12-10 04:09:02.256415] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.938 [2024-12-10 04:09:02.256436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.938 [2024-12-10 04:09:02.256447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.938 [2024-12-10 04:09:02.256457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:07.938 [2024-12-10 04:09:02.257052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2444374 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=4a033306-2d33-481f-be7e-46983a290436 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=4f53433a-a659-4dc9-bf38-1f99405d8fc3 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=b64941f5-a3d6-4821-b10f-ed7eb5414307 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:08.199 null0 00:21:08.199 null1 00:21:08.199 null2 00:21:08.199 [2024-12-10 04:09:02.435945] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.199 [2024-12-10 04:09:02.451683] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:21:08.199 [2024-12-10 04:09:02.451775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2444374 ] 00:21:08.199 [2024-12-10 04:09:02.460157] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2444374 /var/tmp/tgt2.sock 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2444374 ']' 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:08.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
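By this point nsid.sh has generated three namespace UUIDs (the ns1uuid/ns2uuid/ns3uuid uuidgen calls above) and started a second target on /var/tmp/tgt2.sock listening at 10.0.0.1:4421; the lines that follow connect to nqn.2024-10.io.spdk:cnode2 with nvme-cli and check that the NGUID each namespace reports equals its UUID with the dashes stripped. A minimal sketch of that check for the first namespace, assuming the new controller enumerates as nvme0 as it does in this run:

    # connect to the second target exactly as traced below
    nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55

    # ns1uuid value taken from the uuidgen line above; NGUID must be the same hex with dashes removed
    ns1uuid=4a033306-2d33-481f-be7e-46983a290436
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [ "${nguid,,}" = "$(tr -d - <<< "$ns1uuid")" ] && echo "NGUID matches ns1uuid"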
00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.199 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:08.199 [2024-12-10 04:09:02.519278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.199 [2024-12-10 04:09:02.576869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.501 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.501 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:08.501 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:09.094 [2024-12-10 04:09:03.225224] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.094 [2024-12-10 04:09:03.241409] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:09.094 nvme0n1 nvme0n2 00:21:09.094 nvme1n1 00:21:09.094 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:09.094 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:09.094 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.663 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:09.663 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:09.663 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:21:09.663 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:09.663 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:09.663 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:09.663 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:09.663 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:09.663 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:09.663 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:09.663 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:09.663 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:09.663 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:10.597 04:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 4a033306-2d33-481f-be7e-46983a290436 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4a0333062d33481fbe7e46983a290436 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4A0333062D33481FBE7E46983A290436 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 4A0333062D33481FBE7E46983A290436 == \4\A\0\3\3\3\0\6\2\D\3\3\4\8\1\F\B\E\7\E\4\6\9\8\3\A\2\9\0\4\3\6 ]] 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 4f53433a-a659-4dc9-bf38-1f99405d8fc3 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:10.597 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:10.853 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4f53433aa6594dc9bf381f99405d8fc3 00:21:10.853 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4F53433AA6594DC9BF381F99405D8FC3 00:21:10.853 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 4F53433AA6594DC9BF381F99405D8FC3 == \4\F\5\3\4\3\3\A\A\6\5\9\4\D\C\9\B\F\3\8\1\F\9\9\4\0\5\D\8\F\C\3 ]] 00:21:10.853 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:10.853 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:10.853 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:10.853 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:10.853 04:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:10.853 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid b64941f5-a3d6-4821-b10f-ed7eb5414307 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b64941f5a3d64821b10fed7eb5414307 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B64941F5A3D64821B10FED7EB5414307 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ B64941F5A3D64821B10FED7EB5414307 == \B\6\4\9\4\1\F\5\A\3\D\6\4\8\2\1\B\1\0\F\E\D\7\E\B\5\4\1\4\3\0\7 ]] 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2444374 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2444374 ']' 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2444374 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.853 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2444374 00:21:11.112 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:11.112 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:11.112 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2444374' 00:21:11.112 killing process with pid 2444374 00:21:11.112 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2444374 00:21:11.112 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2444374 00:21:11.370 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:11.370 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:11.370 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:11.370 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:11.370 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:21:11.370 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:11.370 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:11.370 rmmod nvme_tcp 00:21:11.370 rmmod nvme_fabrics 00:21:11.370 rmmod nvme_keyring 00:21:11.370 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:11.370 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:11.370 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:11.370 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2444348 ']' 00:21:11.370 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2444348 00:21:11.370 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2444348 ']' 00:21:11.370 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2444348 00:21:11.370 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:11.370 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.370 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2444348 00:21:11.629 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:11.629 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.629 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2444348' 00:21:11.629 killing process with pid 2444348 00:21:11.629 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2444348 00:21:11.629 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2444348 00:21:11.629 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:11.629 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:11.629 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:11.629 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:11.629 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:11.629 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:11.629 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:11.629 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:11.629 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:11.629 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.629 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.629 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.165 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:14.165 00:21:14.165 real 0m8.311s 00:21:14.165 user 0m8.248s 
00:21:14.165 sys 0m2.601s 00:21:14.165 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:14.165 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:14.165 ************************************ 00:21:14.165 END TEST nvmf_nsid 00:21:14.165 ************************************ 00:21:14.165 04:09:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:14.165 00:21:14.165 real 11m45.563s 00:21:14.165 user 28m2.207s 00:21:14.165 sys 2m43.886s 00:21:14.165 04:09:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:14.165 04:09:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:14.165 ************************************ 00:21:14.165 END TEST nvmf_target_extra 00:21:14.165 ************************************ 00:21:14.165 04:09:08 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:14.165 04:09:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:14.165 04:09:08 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:14.165 04:09:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:14.165 ************************************ 00:21:14.165 START TEST nvmf_host 00:21:14.165 ************************************ 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:14.165 * Looking for test storage... 00:21:14.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:14.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.165 --rc genhtml_branch_coverage=1 00:21:14.165 --rc genhtml_function_coverage=1 00:21:14.165 --rc genhtml_legend=1 00:21:14.165 --rc geninfo_all_blocks=1 00:21:14.165 --rc geninfo_unexecuted_blocks=1 00:21:14.165 00:21:14.165 ' 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:14.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.165 --rc genhtml_branch_coverage=1 00:21:14.165 --rc genhtml_function_coverage=1 00:21:14.165 --rc genhtml_legend=1 00:21:14.165 --rc geninfo_all_blocks=1 00:21:14.165 --rc geninfo_unexecuted_blocks=1 00:21:14.165 00:21:14.165 ' 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:14.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.165 --rc genhtml_branch_coverage=1 00:21:14.165 --rc genhtml_function_coverage=1 00:21:14.165 --rc genhtml_legend=1 00:21:14.165 --rc geninfo_all_blocks=1 00:21:14.165 --rc geninfo_unexecuted_blocks=1 00:21:14.165 00:21:14.165 ' 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:14.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.165 --rc genhtml_branch_coverage=1 00:21:14.165 --rc genhtml_function_coverage=1 00:21:14.165 --rc genhtml_legend=1 00:21:14.165 --rc geninfo_all_blocks=1 00:21:14.165 --rc geninfo_unexecuted_blocks=1 00:21:14.165 00:21:14.165 ' 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.165 04:09:08 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:14.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.166 ************************************ 00:21:14.166 START TEST nvmf_multicontroller 00:21:14.166 ************************************ 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:14.166 * Looking for test storage... 
00:21:14.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:14.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.166 --rc genhtml_branch_coverage=1 00:21:14.166 --rc genhtml_function_coverage=1 00:21:14.166 --rc genhtml_legend=1 00:21:14.166 --rc geninfo_all_blocks=1 00:21:14.166 --rc geninfo_unexecuted_blocks=1 00:21:14.166 00:21:14.166 ' 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:14.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.166 --rc genhtml_branch_coverage=1 00:21:14.166 --rc genhtml_function_coverage=1 00:21:14.166 --rc genhtml_legend=1 00:21:14.166 --rc geninfo_all_blocks=1 00:21:14.166 --rc geninfo_unexecuted_blocks=1 00:21:14.166 00:21:14.166 ' 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:14.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.166 --rc genhtml_branch_coverage=1 00:21:14.166 --rc genhtml_function_coverage=1 00:21:14.166 --rc genhtml_legend=1 00:21:14.166 --rc geninfo_all_blocks=1 00:21:14.166 --rc geninfo_unexecuted_blocks=1 00:21:14.166 00:21:14.166 ' 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:14.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.166 --rc genhtml_branch_coverage=1 00:21:14.166 --rc genhtml_function_coverage=1 00:21:14.166 --rc genhtml_legend=1 00:21:14.166 --rc geninfo_all_blocks=1 00:21:14.166 --rc geninfo_unexecuted_blocks=1 00:21:14.166 00:21:14.166 ' 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:14.166 04:09:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.166 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:14.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:14.167 04:09:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:14.167 04:09:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:16.700 
04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:16.700 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:16.700 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.700 04:09:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:16.700 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:16.700 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:16.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:21:16.700 00:21:16.700 --- 10.0.0.2 ping statistics --- 00:21:16.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.700 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:21:16.700 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:16.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:16.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:21:16.700 00:21:16.700 --- 10.0.0.1 ping statistics --- 00:21:16.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.700 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2447317 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2447317 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2447317 ']' 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.701 [2024-12-10 04:09:10.692457] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
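One detail worth decoding in the nvmf_tgt invocation above: the reactor mask -m 0xE is binary 1110, i.e. bits 1-3 set and bit 0 clear, so SPDK runs its reactors on cores 1, 2 and 3 — consistent with the "Total cores available: 3" notice and the three reactor start messages in the output that follows. A quick way to expand such a mask (an illustrative one-liner, not part of the captured run):

    # Prints "cores: 1 2 3" for mask 0xE (binary 1110)
    mask=0xE; printf 'cores:'; for b in {0..31}; do (( (mask >> b) & 1 )) && printf ' %d' "$b"; done; echo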
00:21:16.701 [2024-12-10 04:09:10.692560] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.701 [2024-12-10 04:09:10.765147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:16.701 [2024-12-10 04:09:10.819898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.701 [2024-12-10 04:09:10.819957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.701 [2024-12-10 04:09:10.819978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.701 [2024-12-10 04:09:10.819995] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.701 [2024-12-10 04:09:10.820009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:16.701 [2024-12-10 04:09:10.821634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.701 [2024-12-10 04:09:10.821660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:16.701 [2024-12-10 04:09:10.821664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.701 [2024-12-10 04:09:10.972190] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.701 04:09:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.701 Malloc0 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.701 [2024-12-10 04:09:11.035377] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.701 [2024-12-10 04:09:11.043254] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.701 Malloc1 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.701 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.959 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.959 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:16.959 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.959 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.959 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.960 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:16.960 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.960 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.960 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.960 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2447407 00:21:16.960 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:16.960 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:16.960 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2447407 /var/tmp/bdevperf.sock 00:21:16.960 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2447407 ']' 00:21:16.960 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.960 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.960 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
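Once the target is listening on /var/tmp/spdk.sock, everything above it is provisioned over JSON-RPC: a TCP transport, two 64 MB malloc bdevs, and two subsystems (cnode1 and cnode2) that each get a namespace and listeners on ports 4420 and 4421, after which bdevperf is started with -z (wait for RPC) on its own socket. The rpc_cmd helper in the trace is a thin wrapper around scripts/rpc.py, so the equivalent sequence is roughly as follows (NQNs, ports and sizes copied from the trace, paths shortened):

# target-side provisioning, mirroring host/multicontroller.sh lines 27-44 as traced above
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
# the initiator-side I/O generator, waiting for its own RPC socket
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &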
00:21:16.960 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.960 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.218 NVMe0n1 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.218 1 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.218 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.476 request: 00:21:17.476 { 00:21:17.476 "name": "NVMe0", 00:21:17.476 "trtype": "tcp", 00:21:17.476 "traddr": "10.0.0.2", 00:21:17.476 "adrfam": "ipv4", 00:21:17.476 "trsvcid": "4420", 00:21:17.476 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:17.476 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:17.476 "hostaddr": "10.0.0.1", 00:21:17.476 "prchk_reftag": false, 00:21:17.476 "prchk_guard": false, 00:21:17.476 "hdgst": false, 00:21:17.477 "ddgst": false, 00:21:17.477 "allow_unrecognized_csi": false, 00:21:17.477 "method": "bdev_nvme_attach_controller", 00:21:17.477 "req_id": 1 00:21:17.477 } 00:21:17.477 Got JSON-RPC error response 00:21:17.477 response: 00:21:17.477 { 00:21:17.477 "code": -114, 00:21:17.477 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:17.477 } 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.477 request: 00:21:17.477 { 00:21:17.477 "name": "NVMe0", 00:21:17.477 "trtype": "tcp", 00:21:17.477 "traddr": "10.0.0.2", 00:21:17.477 "adrfam": "ipv4", 00:21:17.477 "trsvcid": "4420", 00:21:17.477 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:17.477 "hostaddr": "10.0.0.1", 00:21:17.477 "prchk_reftag": false, 00:21:17.477 "prchk_guard": false, 00:21:17.477 "hdgst": false, 00:21:17.477 "ddgst": false, 00:21:17.477 "allow_unrecognized_csi": false, 00:21:17.477 "method": "bdev_nvme_attach_controller", 00:21:17.477 "req_id": 1 00:21:17.477 } 00:21:17.477 Got JSON-RPC error response 00:21:17.477 response: 00:21:17.477 { 00:21:17.477 "code": -114, 00:21:17.477 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:17.477 } 00:21:17.477 04:09:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.477 request: 00:21:17.477 { 00:21:17.477 "name": "NVMe0", 00:21:17.477 "trtype": "tcp", 00:21:17.477 "traddr": "10.0.0.2", 00:21:17.477 "adrfam": "ipv4", 00:21:17.477 "trsvcid": "4420", 00:21:17.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.477 "hostaddr": "10.0.0.1", 00:21:17.477 "prchk_reftag": false, 00:21:17.477 "prchk_guard": false, 00:21:17.477 "hdgst": false, 00:21:17.477 "ddgst": false, 00:21:17.477 "multipath": "disable", 00:21:17.477 "allow_unrecognized_csi": false, 00:21:17.477 "method": "bdev_nvme_attach_controller", 00:21:17.477 "req_id": 1 00:21:17.477 } 00:21:17.477 Got JSON-RPC error response 00:21:17.477 response: 00:21:17.477 { 00:21:17.477 "code": -114, 00:21:17.477 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:17.477 } 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:17.477 04:09:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.477 request: 00:21:17.477 { 00:21:17.477 "name": "NVMe0", 00:21:17.477 "trtype": "tcp", 00:21:17.477 "traddr": "10.0.0.2", 00:21:17.477 "adrfam": "ipv4", 00:21:17.477 "trsvcid": "4420", 00:21:17.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.477 "hostaddr": "10.0.0.1", 00:21:17.477 "prchk_reftag": false, 00:21:17.477 "prchk_guard": false, 00:21:17.477 "hdgst": false, 00:21:17.477 "ddgst": false, 00:21:17.477 "multipath": "failover", 00:21:17.477 "allow_unrecognized_csi": false, 00:21:17.477 "method": "bdev_nvme_attach_controller", 00:21:17.477 "req_id": 1 00:21:17.477 } 00:21:17.477 Got JSON-RPC error response 00:21:17.477 response: 00:21:17.477 { 00:21:17.477 "code": -114, 00:21:17.477 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:17.477 } 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.477 NVMe0n1 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
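The four rejected bdev_nvme_attach_controller calls above all come back with JSON-RPC error -114 and probe the duplicate-path rules for an existing controller name: re-adding the already-attached 10.0.0.2:4420 path (even under a different host NQN), pointing NVMe0 at a different subsystem NQN, forcing -x disable, and asking for -x failover onto that same already-attached path are all refused, while the final call, which adds the 4421 listener as a genuinely new path to the same subsystem, is accepted. In shorthand against the bdevperf RPC socket (the rpc helper function is just local shorthand; every flag and value comes from the trace):

# talk to bdevperf's RPC socket rather than the target's
rpc() { ./scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }
nqn1=nqn.2016-06.io.spdk:cnode1

# rejected (-114): 10.0.0.2:4420 is already attached to controller NVMe0
rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn1" -i 10.0.0.1 -x failover
# accepted: port 4421 is a new path to the same subsystem, so NVMe0 gains a second path
rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$nqn1"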
00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.477 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.477 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.478 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:17.478 04:09:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:18.856 { 00:21:18.856 "results": [ 00:21:18.856 { 00:21:18.856 "job": "NVMe0n1", 00:21:18.856 "core_mask": "0x1", 00:21:18.856 "workload": "write", 00:21:18.856 "status": "finished", 00:21:18.856 "queue_depth": 128, 00:21:18.856 "io_size": 4096, 00:21:18.856 "runtime": 1.006267, 00:21:18.856 "iops": 18428.50853699863, 00:21:18.856 "mibps": 71.9863614726509, 00:21:18.856 "io_failed": 0, 00:21:18.856 "io_timeout": 0, 00:21:18.856 "avg_latency_us": 6934.134800754162, 00:21:18.856 "min_latency_us": 5995.3303703703705, 00:21:18.856 "max_latency_us": 13689.742222222223 00:21:18.856 } 00:21:18.856 ], 00:21:18.856 "core_count": 1 00:21:18.856 } 00:21:18.856 04:09:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:18.856 04:09:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.856 04:09:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:18.856 04:09:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.856 04:09:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:18.856 04:09:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2447407 00:21:18.856 04:09:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 2447407 ']' 00:21:18.856 04:09:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2447407 00:21:18.856 04:09:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:18.856 04:09:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:18.856 04:09:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2447407 00:21:18.856 04:09:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:18.856 04:09:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:18.856 04:09:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2447407' 00:21:18.856 killing process with pid 2447407 00:21:18.856 04:09:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2447407 00:21:18.856 04:09:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2447407 00:21:18.856 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:18.856 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.856 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:18.856 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.856 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:18.856 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.856 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:18.857 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.857 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:18.857 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:18.857 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:18.857 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:18.857 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:18.857 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:18.857 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:18.857 [2024-12-10 04:09:11.150186] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:21:18.857 [2024-12-10 04:09:11.150281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2447407 ] 00:21:18.857 [2024-12-10 04:09:11.221115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.857 [2024-12-10 04:09:11.279475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.857 [2024-12-10 04:09:11.783472] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 1f25ac2c-72a3-45a2-82d6-f68ebf31847b already exists 00:21:18.857 [2024-12-10 04:09:11.783511] bdev.c:8150:bdev_register: *ERROR*: Unable to add uuid:1f25ac2c-72a3-45a2-82d6-f68ebf31847b alias for bdev NVMe1n1 00:21:18.857 [2024-12-10 04:09:11.783541] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:18.857 Running I/O for 1 seconds... 00:21:18.857 18416.00 IOPS, 71.94 MiB/s 00:21:18.857 Latency(us) 00:21:18.857 [2024-12-10T03:09:13.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.857 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:18.857 NVMe0n1 : 1.01 18428.51 71.99 0.00 0.00 6934.13 5995.33 13689.74 00:21:18.857 [2024-12-10T03:09:13.246Z] =================================================================================================================== 00:21:18.857 [2024-12-10T03:09:13.246Z] Total : 18428.51 71.99 0.00 0.00 6934.13 5995.33 13689.74 00:21:18.857 Received shutdown signal, test time was about 1.000000 seconds 00:21:18.857 00:21:18.857 Latency(us) 00:21:18.857 [2024-12-10T03:09:13.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.857 [2024-12-10T03:09:13.246Z] =================================================================================================================== 00:21:18.857 [2024-12-10T03:09:13.246Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.857 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:18.857 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:18.857 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:18.857 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:18.857 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:18.857 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:18.857 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:18.857 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:18.857 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:18.857 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:18.857 rmmod nvme_tcp 00:21:19.117 rmmod nvme_fabrics 00:21:19.117 rmmod nvme_keyring 00:21:19.117 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:19.117 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:19.117 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:19.117 
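The perform_tests output embedded above (returned by examples/bdev/bdevperf/bdevperf.py) is worth a second look: the 1-second, 4 KiB, queue-depth-128 write job against NVMe0n1 completed about 18.4k IOPS, roughly 72 MiB/s, at an average latency of about 6.9 ms, and those figures hang together, since at a fixed queue depth the average latency should sit close to queue_depth divided by IOPS. A throwaway check with the numbers copied from the JSON:

# sanity-check the reported bdevperf figures against each other
awk 'BEGIN {
    iops = 18428.50853699863; qd = 128; io_size = 4096
    printf "throughput: %.2f MiB/s\n", iops * io_size / (1024 * 1024)  # ~71.99, matches the reported mibps
    printf "expected avg latency: %.0f us\n", qd / iops * 1e6          # ~6946 us vs. 6934 us reported
}'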
04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2447317 ']' 00:21:19.117 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2447317 00:21:19.117 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2447317 ']' 00:21:19.117 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2447317 00:21:19.117 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:19.117 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.117 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2447317 00:21:19.117 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:19.117 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:19.117 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2447317' 00:21:19.117 killing process with pid 2447317 00:21:19.117 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2447317 00:21:19.117 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2447317 00:21:19.375 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:19.375 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:19.375 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:19.375 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:19.375 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:19.375 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:19.375 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:19.375 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:19.375 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:19.376 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.376 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.376 04:09:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.284 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:21.284 00:21:21.284 real 0m7.359s 00:21:21.284 user 0m11.268s 00:21:21.284 sys 0m2.298s 00:21:21.284 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.284 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:21.284 ************************************ 00:21:21.285 END TEST nvmf_multicontroller 00:21:21.285 ************************************ 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
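Teardown runs in roughly the reverse order of setup: bdevperf is killed, both subsystems are deleted over RPC, the captured try.txt is dumped into the log and removed, the nvme-tcp and nvme-fabrics modules are unloaded, the nvmf_tgt inside the namespace is killed, the SPDK-tagged iptables rules are dropped by re-applying a filtered dump, and the namespace goes away before the initiator address is flushed in the next entry. In outline (the pids are the ones from this run; the ip netns delete line is an assumed stand-in for remove_spdk_ns):

# cleanup outline matching the trace
kill "$bdevperf_pid"                                   # 2447407, the initiator-side bdevperf
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                                        # 2447317, the nvmf_tgt in the namespace
iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK comment-tagged rules
ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of remove_spdk_ns
ip -4 addr flush cvl_0_1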
--transport=tcp 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.543 ************************************ 00:21:21.543 START TEST nvmf_aer 00:21:21.543 ************************************ 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:21.543 * Looking for test storage... 00:21:21.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:21.543 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:21.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.544 --rc genhtml_branch_coverage=1 00:21:21.544 --rc genhtml_function_coverage=1 00:21:21.544 --rc genhtml_legend=1 00:21:21.544 --rc geninfo_all_blocks=1 00:21:21.544 --rc geninfo_unexecuted_blocks=1 00:21:21.544 00:21:21.544 ' 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:21.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.544 --rc genhtml_branch_coverage=1 00:21:21.544 --rc genhtml_function_coverage=1 00:21:21.544 --rc genhtml_legend=1 00:21:21.544 --rc geninfo_all_blocks=1 00:21:21.544 --rc geninfo_unexecuted_blocks=1 00:21:21.544 00:21:21.544 ' 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:21.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.544 --rc genhtml_branch_coverage=1 00:21:21.544 --rc genhtml_function_coverage=1 00:21:21.544 --rc genhtml_legend=1 00:21:21.544 --rc geninfo_all_blocks=1 00:21:21.544 --rc geninfo_unexecuted_blocks=1 00:21:21.544 00:21:21.544 ' 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:21.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.544 --rc genhtml_branch_coverage=1 00:21:21.544 --rc genhtml_function_coverage=1 00:21:21.544 --rc genhtml_legend=1 00:21:21.544 --rc geninfo_all_blocks=1 00:21:21.544 --rc geninfo_unexecuted_blocks=1 00:21:21.544 00:21:21.544 ' 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:21.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:21.544 04:09:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.079 04:09:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:24.079 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:24.079 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:24.079 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.079 04:09:18 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:24.079 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:24.079 
04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:24.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:21:24.079 00:21:24.079 --- 10.0.0.2 ping statistics --- 00:21:24.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.079 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:24.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:21:24.079 00:21:24.079 --- 10.0.0.1 ping statistics --- 00:21:24.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.079 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:24.079 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:24.080 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:24.080 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:24.080 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:24.080 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.080 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2449675 00:21:24.080 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:24.080 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2449675 00:21:24.080 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2449675 ']' 00:21:24.080 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.080 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.080 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.080 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.080 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.080 [2024-12-10 04:09:18.221806] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:21:24.080 [2024-12-10 04:09:18.221901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.080 [2024-12-10 04:09:18.297476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:24.080 [2024-12-10 04:09:18.355488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.080 [2024-12-10 04:09:18.355564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.080 [2024-12-10 04:09:18.355581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.080 [2024-12-10 04:09:18.355607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.080 [2024-12-10 04:09:18.355617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.080 [2024-12-10 04:09:18.357239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.080 [2024-12-10 04:09:18.357300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.080 [2024-12-10 04:09:18.357408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:24.080 [2024-12-10 04:09:18.357411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.339 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.339 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:24.339 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:24.339 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:24.339 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.339 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.339 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:24.339 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.339 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.339 [2024-12-10 04:09:18.496769] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.339 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.339 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:24.339 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.339 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.339 Malloc0 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.340 [2024-12-10 04:09:18.570293] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.340 [ 00:21:24.340 { 00:21:24.340 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:24.340 "subtype": "Discovery", 00:21:24.340 "listen_addresses": [], 00:21:24.340 "allow_any_host": true, 00:21:24.340 "hosts": [] 00:21:24.340 }, 00:21:24.340 { 00:21:24.340 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.340 "subtype": "NVMe", 00:21:24.340 "listen_addresses": [ 00:21:24.340 { 00:21:24.340 "trtype": "TCP", 00:21:24.340 "adrfam": "IPv4", 00:21:24.340 "traddr": "10.0.0.2", 00:21:24.340 "trsvcid": "4420" 00:21:24.340 } 00:21:24.340 ], 00:21:24.340 "allow_any_host": true, 00:21:24.340 "hosts": [], 00:21:24.340 "serial_number": "SPDK00000000000001", 00:21:24.340 "model_number": "SPDK bdev Controller", 00:21:24.340 "max_namespaces": 2, 00:21:24.340 "min_cntlid": 1, 00:21:24.340 "max_cntlid": 65519, 00:21:24.340 "namespaces": [ 00:21:24.340 { 00:21:24.340 "nsid": 1, 00:21:24.340 "bdev_name": "Malloc0", 00:21:24.340 "name": "Malloc0", 00:21:24.340 "nguid": "2E0679B131034FA4BBE5DFD8906EA67E", 00:21:24.340 "uuid": "2e0679b1-3103-4fa4-bbe5-dfd8906ea67e" 00:21:24.340 } 00:21:24.340 ] 00:21:24.340 } 00:21:24.340 ] 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2449709 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:24.340 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:24.599 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:24.599 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:21:24.599 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:21:24.599 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:24.599 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:24.600 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 3 -lt 200 ']' 00:21:24.600 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=4 00:21:24.600 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:24.858 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:24.858 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:24.858 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:24.858 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:24.858 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.858 04:09:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.858 Malloc1 00:21:24.858 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.858 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:24.858 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.858 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.858 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.858 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:24.858 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.858 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.858 [ 00:21:24.858 { 00:21:24.858 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:24.858 "subtype": "Discovery", 00:21:24.858 "listen_addresses": [], 00:21:24.858 "allow_any_host": true, 00:21:24.858 "hosts": [] 00:21:24.858 }, 00:21:24.858 { 00:21:24.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.858 "subtype": "NVMe", 00:21:24.858 "listen_addresses": [ 00:21:24.858 { 00:21:24.858 "trtype": "TCP", 00:21:24.859 "adrfam": "IPv4", 00:21:24.859 "traddr": "10.0.0.2", 00:21:24.859 "trsvcid": "4420" 00:21:24.859 } 00:21:24.859 ], 00:21:24.859 "allow_any_host": true, 00:21:24.859 "hosts": [], 00:21:24.859 "serial_number": "SPDK00000000000001", 00:21:24.859 "model_number": "SPDK bdev Controller", 00:21:24.859 "max_namespaces": 2, 00:21:24.859 "min_cntlid": 1, 00:21:24.859 "max_cntlid": 65519, 00:21:24.859 "namespaces": [ 00:21:24.859 { 00:21:24.859 "nsid": 1, 00:21:24.859 "bdev_name": "Malloc0", 00:21:24.859 "name": "Malloc0", 00:21:24.859 "nguid": "2E0679B131034FA4BBE5DFD8906EA67E", 00:21:24.859 "uuid": "2e0679b1-3103-4fa4-bbe5-dfd8906ea67e" 00:21:24.859 }, 00:21:24.859 { 00:21:24.859 "nsid": 2, 00:21:24.859 "bdev_name": "Malloc1", 00:21:24.859 "name": "Malloc1", 00:21:24.859 "nguid": "E21B16F1331641A09F64DE95E2420B4F", 00:21:24.859 "uuid": "e21b16f1-3316-41a0-9f64-de95e2420b4f" 00:21:24.859 } 00:21:24.859 ] 00:21:24.859 } 00:21:24.859 ] 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2449709 00:21:24.859 Asynchronous Event Request test 00:21:24.859 Attaching to 10.0.0.2 00:21:24.859 Attached to 10.0.0.2 00:21:24.859 Registering asynchronous event callbacks... 00:21:24.859 Starting namespace attribute notice tests for all controllers... 00:21:24.859 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:24.859 aer_cb - Changed Namespace 00:21:24.859 Cleaning up... 
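The nvmf_aer trace above reduces to a fairly small manual sequence: move the target-side E810 port into its own network namespace, start nvmf_tgt inside that namespace, provision one TCP subsystem with a single malloc namespace, launch the aer helper, then hot-add a second namespace so the target emits the namespace-attribute-changed notice the helper is waiting for. The condensed sketch below is not a verbatim replay of host/aer.sh: interface names, addresses, and paths are the ones from this particular run (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2, the Jenkins workspace), and rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, so the direct rpc.py invocations here are assumed equivalents run from the SPDK repo root.

  # carve the target port into a namespace and wire up addresses (run-specific names)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  modprobe nvme-tcp

  # start the target in the namespace and provision the test subsystem
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # run the AER helper in the background, then add a second namespace to trigger the AEN
  # (host/aer.sh waits for /tmp/aer_touch_file before adding Malloc1)
  ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

Because the nvmf_tgt RPC socket (/var/tmp/spdk.sock) lives in the filesystem rather than the network namespace, rpc.py can be driven from the host side without ip netns exec, which is why the trace shows the wrapper calls running outside the namespace.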
00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:24.859 rmmod nvme_tcp 00:21:24.859 rmmod nvme_fabrics 00:21:24.859 rmmod nvme_keyring 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2449675 ']' 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2449675 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2449675 ']' 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2449675 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.859 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2449675 00:21:25.117 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:25.117 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:25.117 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2449675' 00:21:25.117 killing process with pid 2449675 00:21:25.117 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # 
kill 2449675 00:21:25.117 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2449675 00:21:25.117 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:25.117 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:25.117 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:25.117 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:25.117 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:25.117 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:25.117 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:25.117 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:25.117 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:25.117 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.117 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.117 04:09:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:27.655 00:21:27.655 real 0m5.822s 00:21:27.655 user 0m5.205s 00:21:27.655 sys 0m2.098s 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:27.655 ************************************ 00:21:27.655 END TEST nvmf_aer 00:21:27.655 ************************************ 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.655 ************************************ 00:21:27.655 START TEST nvmf_async_init 00:21:27.655 ************************************ 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:27.655 * Looking for test storage... 
00:21:27.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:27.655 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:27.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.656 --rc genhtml_branch_coverage=1 00:21:27.656 --rc genhtml_function_coverage=1 00:21:27.656 --rc genhtml_legend=1 00:21:27.656 --rc geninfo_all_blocks=1 00:21:27.656 --rc geninfo_unexecuted_blocks=1 00:21:27.656 00:21:27.656 ' 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:27.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.656 --rc genhtml_branch_coverage=1 00:21:27.656 --rc genhtml_function_coverage=1 00:21:27.656 --rc genhtml_legend=1 00:21:27.656 --rc geninfo_all_blocks=1 00:21:27.656 --rc geninfo_unexecuted_blocks=1 00:21:27.656 00:21:27.656 ' 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:27.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.656 --rc genhtml_branch_coverage=1 00:21:27.656 --rc genhtml_function_coverage=1 00:21:27.656 --rc genhtml_legend=1 00:21:27.656 --rc geninfo_all_blocks=1 00:21:27.656 --rc geninfo_unexecuted_blocks=1 00:21:27.656 00:21:27.656 ' 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:27.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.656 --rc genhtml_branch_coverage=1 00:21:27.656 --rc genhtml_function_coverage=1 00:21:27.656 --rc genhtml_legend=1 00:21:27.656 --rc geninfo_all_blocks=1 00:21:27.656 --rc geninfo_unexecuted_blocks=1 00:21:27.656 00:21:27.656 ' 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.656 04:09:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:27.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:27.656 04:09:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6c5e045780f847a6806af801a3749cf9 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:27.656 04:09:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:29.559 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:29.559 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:29.559 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:29.559 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:29.559 04:09:23 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:29.559 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:29.560 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:29.560 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:29.560 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:29.560 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:29.560 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:29.560 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:29.560 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:29.560 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:29.560 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:29.560 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:29.560 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:29.818 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:29.818 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:29.818 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:29.818 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:29.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:29.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:21:29.818 00:21:29.818 --- 10.0.0.2 ping statistics --- 00:21:29.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.818 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:21:29.818 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:29.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:29.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:21:29.818 00:21:29.818 --- 10.0.0.1 ping statistics --- 00:21:29.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.818 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:21:29.818 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.818 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:29.818 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:29.818 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.818 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:29.818 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:29.818 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.818 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:29.818 04:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:29.818 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:29.818 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:29.818 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:29.818 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:29.818 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2451775 00:21:29.818 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:29.818 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2451775 00:21:29.818 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2451775 ']' 00:21:29.818 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.818 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:29.818 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.818 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:29.818 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:29.818 [2024-12-10 04:09:24.057247] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:21:29.818 [2024-12-10 04:09:24.057332] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.818 [2024-12-10 04:09:24.129143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.818 [2024-12-10 04:09:24.182807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.818 [2024-12-10 04:09:24.182870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.818 [2024-12-10 04:09:24.182884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.818 [2024-12-10 04:09:24.182895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.818 [2024-12-10 04:09:24.182904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.818 [2024-12-10 04:09:24.183450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.078 [2024-12-10 04:09:24.323693] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.078 null0 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6c5e045780f847a6806af801a3749cf9 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.078 [2024-12-10 04:09:24.363974] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.078 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.338 nvme0n1 00:21:30.338 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.338 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:30.338 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.338 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.338 [ 00:21:30.338 { 00:21:30.338 "name": "nvme0n1", 00:21:30.338 "aliases": [ 00:21:30.338 "6c5e0457-80f8-47a6-806a-f801a3749cf9" 00:21:30.338 ], 00:21:30.338 "product_name": "NVMe disk", 00:21:30.338 "block_size": 512, 00:21:30.338 "num_blocks": 2097152, 00:21:30.338 "uuid": "6c5e0457-80f8-47a6-806a-f801a3749cf9", 00:21:30.338 "numa_id": 0, 00:21:30.338 "assigned_rate_limits": { 00:21:30.338 "rw_ios_per_sec": 0, 00:21:30.338 "rw_mbytes_per_sec": 0, 00:21:30.338 "r_mbytes_per_sec": 0, 00:21:30.338 "w_mbytes_per_sec": 0 00:21:30.338 }, 00:21:30.338 "claimed": false, 00:21:30.338 "zoned": false, 00:21:30.338 "supported_io_types": { 00:21:30.338 "read": true, 00:21:30.338 "write": true, 00:21:30.338 "unmap": false, 00:21:30.338 "flush": true, 00:21:30.338 "reset": true, 00:21:30.338 "nvme_admin": true, 00:21:30.338 "nvme_io": true, 00:21:30.338 "nvme_io_md": false, 00:21:30.338 "write_zeroes": true, 00:21:30.338 "zcopy": false, 00:21:30.338 "get_zone_info": false, 00:21:30.338 "zone_management": false, 00:21:30.338 "zone_append": false, 00:21:30.338 "compare": true, 00:21:30.338 "compare_and_write": true, 00:21:30.338 "abort": true, 00:21:30.338 "seek_hole": false, 00:21:30.338 "seek_data": false, 00:21:30.338 "copy": true, 00:21:30.338 "nvme_iov_md": false 00:21:30.338 }, 00:21:30.338 
"memory_domains": [ 00:21:30.338 { 00:21:30.338 "dma_device_id": "system", 00:21:30.338 "dma_device_type": 1 00:21:30.338 } 00:21:30.338 ], 00:21:30.338 "driver_specific": { 00:21:30.338 "nvme": [ 00:21:30.338 { 00:21:30.338 "trid": { 00:21:30.339 "trtype": "TCP", 00:21:30.339 "adrfam": "IPv4", 00:21:30.339 "traddr": "10.0.0.2", 00:21:30.339 "trsvcid": "4420", 00:21:30.339 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:30.339 }, 00:21:30.339 "ctrlr_data": { 00:21:30.339 "cntlid": 1, 00:21:30.339 "vendor_id": "0x8086", 00:21:30.339 "model_number": "SPDK bdev Controller", 00:21:30.339 "serial_number": "00000000000000000000", 00:21:30.339 "firmware_revision": "25.01", 00:21:30.339 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:30.339 "oacs": { 00:21:30.339 "security": 0, 00:21:30.339 "format": 0, 00:21:30.339 "firmware": 0, 00:21:30.339 "ns_manage": 0 00:21:30.339 }, 00:21:30.339 "multi_ctrlr": true, 00:21:30.339 "ana_reporting": false 00:21:30.339 }, 00:21:30.339 "vs": { 00:21:30.339 "nvme_version": "1.3" 00:21:30.339 }, 00:21:30.339 "ns_data": { 00:21:30.339 "id": 1, 00:21:30.339 "can_share": true 00:21:30.339 } 00:21:30.339 } 00:21:30.339 ], 00:21:30.339 "mp_policy": "active_passive" 00:21:30.339 } 00:21:30.339 } 00:21:30.339 ] 00:21:30.339 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.339 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:30.339 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.339 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.339 [2024-12-10 04:09:24.613005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:30.339 [2024-12-10 04:09:24.613093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda0740 (9): Bad file descriptor 00:21:30.598 [2024-12-10 04:09:24.745676] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.598 [ 00:21:30.598 { 00:21:30.598 "name": "nvme0n1", 00:21:30.598 "aliases": [ 00:21:30.598 "6c5e0457-80f8-47a6-806a-f801a3749cf9" 00:21:30.598 ], 00:21:30.598 "product_name": "NVMe disk", 00:21:30.598 "block_size": 512, 00:21:30.598 "num_blocks": 2097152, 00:21:30.598 "uuid": "6c5e0457-80f8-47a6-806a-f801a3749cf9", 00:21:30.598 "numa_id": 0, 00:21:30.598 "assigned_rate_limits": { 00:21:30.598 "rw_ios_per_sec": 0, 00:21:30.598 "rw_mbytes_per_sec": 0, 00:21:30.598 "r_mbytes_per_sec": 0, 00:21:30.598 "w_mbytes_per_sec": 0 00:21:30.598 }, 00:21:30.598 "claimed": false, 00:21:30.598 "zoned": false, 00:21:30.598 "supported_io_types": { 00:21:30.598 "read": true, 00:21:30.598 "write": true, 00:21:30.598 "unmap": false, 00:21:30.598 "flush": true, 00:21:30.598 "reset": true, 00:21:30.598 "nvme_admin": true, 00:21:30.598 "nvme_io": true, 00:21:30.598 "nvme_io_md": false, 00:21:30.598 "write_zeroes": true, 00:21:30.598 "zcopy": false, 00:21:30.598 "get_zone_info": false, 00:21:30.598 "zone_management": false, 00:21:30.598 "zone_append": false, 00:21:30.598 "compare": true, 00:21:30.598 "compare_and_write": true, 00:21:30.598 "abort": true, 00:21:30.598 "seek_hole": false, 00:21:30.598 "seek_data": false, 00:21:30.598 "copy": true, 00:21:30.598 "nvme_iov_md": false 00:21:30.598 }, 00:21:30.598 "memory_domains": [ 00:21:30.598 { 00:21:30.598 "dma_device_id": "system", 00:21:30.598 "dma_device_type": 1 00:21:30.598 } 00:21:30.598 ], 00:21:30.598 "driver_specific": { 00:21:30.598 "nvme": [ 00:21:30.598 { 00:21:30.598 "trid": { 00:21:30.598 "trtype": "TCP", 00:21:30.598 "adrfam": "IPv4", 00:21:30.598 "traddr": "10.0.0.2", 00:21:30.598 "trsvcid": "4420", 00:21:30.598 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:30.598 }, 00:21:30.598 "ctrlr_data": { 00:21:30.598 "cntlid": 2, 00:21:30.598 "vendor_id": "0x8086", 00:21:30.598 "model_number": "SPDK bdev Controller", 00:21:30.598 "serial_number": "00000000000000000000", 00:21:30.598 "firmware_revision": "25.01", 00:21:30.598 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:30.598 "oacs": { 00:21:30.598 "security": 0, 00:21:30.598 "format": 0, 00:21:30.598 "firmware": 0, 00:21:30.598 "ns_manage": 0 00:21:30.598 }, 00:21:30.598 "multi_ctrlr": true, 00:21:30.598 "ana_reporting": false 00:21:30.598 }, 00:21:30.598 "vs": { 00:21:30.598 "nvme_version": "1.3" 00:21:30.598 }, 00:21:30.598 "ns_data": { 00:21:30.598 "id": 1, 00:21:30.598 "can_share": true 00:21:30.598 } 00:21:30.598 } 00:21:30.598 ], 00:21:30.598 "mp_policy": "active_passive" 00:21:30.598 } 00:21:30.598 } 00:21:30.598 ] 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
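The reset sequence above shows the expected path: bdev_nvme_reset_controller first disconnects the admin qpair (the "Failed to flush tqpair ... Bad file descriptor" error is the teardown of the old socket, not a test failure), then reconnects, and the follow-up bdev_get_bdevs returns the same namespace UUID but with ctrlr_data.cntlid advanced from 1 to 2 because the reset established a new controller association. A short sketch of that step, under the same rpc.py assumption as above:

# Reset, verify the new association, then drop the host-side controller.
$RPC bdev_nvme_reset_controller nvme0
$RPC bdev_get_bdevs -b nvme0n1                 # same UUID, ctrlr_data.cntlid is now 2
$RPC bdev_nvme_detach_controller nvme0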
00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.0Zd0TfHPHm 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.0Zd0TfHPHm 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.0Zd0TfHPHm 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.598 [2024-12-10 04:09:24.801646] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:30.598 [2024-12-10 04:09:24.801785] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.598 [2024-12-10 04:09:24.817684] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:30.598 nvme0n1 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.598 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.598 [ 00:21:30.598 { 00:21:30.598 "name": "nvme0n1", 00:21:30.598 "aliases": [ 00:21:30.598 "6c5e0457-80f8-47a6-806a-f801a3749cf9" 00:21:30.598 ], 00:21:30.598 "product_name": "NVMe disk", 00:21:30.598 "block_size": 512, 00:21:30.598 "num_blocks": 2097152, 00:21:30.599 "uuid": "6c5e0457-80f8-47a6-806a-f801a3749cf9", 00:21:30.599 "numa_id": 0, 00:21:30.599 "assigned_rate_limits": { 00:21:30.599 "rw_ios_per_sec": 0, 00:21:30.599 "rw_mbytes_per_sec": 0, 00:21:30.599 "r_mbytes_per_sec": 0, 00:21:30.599 "w_mbytes_per_sec": 0 00:21:30.599 }, 00:21:30.599 "claimed": false, 00:21:30.599 "zoned": false, 00:21:30.599 "supported_io_types": { 00:21:30.599 "read": true, 00:21:30.599 "write": true, 00:21:30.599 "unmap": false, 00:21:30.599 "flush": true, 00:21:30.599 "reset": true, 00:21:30.599 "nvme_admin": true, 00:21:30.599 "nvme_io": true, 00:21:30.599 "nvme_io_md": false, 00:21:30.599 "write_zeroes": true, 00:21:30.599 "zcopy": false, 00:21:30.599 "get_zone_info": false, 00:21:30.599 "zone_management": false, 00:21:30.599 "zone_append": false, 00:21:30.599 "compare": true, 00:21:30.599 "compare_and_write": true, 00:21:30.599 "abort": true, 00:21:30.599 "seek_hole": false, 00:21:30.599 "seek_data": false, 00:21:30.599 "copy": true, 00:21:30.599 "nvme_iov_md": false 00:21:30.599 }, 00:21:30.599 "memory_domains": [ 00:21:30.599 { 00:21:30.599 "dma_device_id": "system", 00:21:30.599 "dma_device_type": 1 00:21:30.599 } 00:21:30.599 ], 00:21:30.599 "driver_specific": { 00:21:30.599 "nvme": [ 00:21:30.599 { 00:21:30.599 "trid": { 00:21:30.599 "trtype": "TCP", 00:21:30.599 "adrfam": "IPv4", 00:21:30.599 "traddr": "10.0.0.2", 00:21:30.599 "trsvcid": "4421", 00:21:30.599 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:30.599 }, 00:21:30.599 "ctrlr_data": { 00:21:30.599 "cntlid": 3, 00:21:30.599 "vendor_id": "0x8086", 00:21:30.599 "model_number": "SPDK bdev Controller", 00:21:30.599 "serial_number": "00000000000000000000", 00:21:30.599 "firmware_revision": "25.01", 00:21:30.599 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:30.599 "oacs": { 00:21:30.599 "security": 0, 00:21:30.599 "format": 0, 00:21:30.599 "firmware": 0, 00:21:30.599 "ns_manage": 0 00:21:30.599 }, 00:21:30.599 "multi_ctrlr": true, 00:21:30.599 "ana_reporting": false 00:21:30.599 }, 00:21:30.599 "vs": { 00:21:30.599 "nvme_version": "1.3" 00:21:30.599 }, 00:21:30.599 "ns_data": { 00:21:30.599 "id": 1, 00:21:30.599 "can_share": true 00:21:30.599 } 00:21:30.599 } 00:21:30.599 ], 00:21:30.599 "mp_policy": "active_passive" 00:21:30.599 } 00:21:30.599 } 00:21:30.599 ] 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.0Zd0TfHPHm 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
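The lines above exercise the TLS/PSK variant: a PSK in the NVMe TLS interchange format is written to a mode-0600 file and registered as key0, open access to the subsystem is disabled, a second listener is added on port 4421 with --secure-channel (both the listener and the attach print the "TLS support is considered experimental" notice), the host NQN is granted access with that PSK, and the host reattaches over 4421; the resulting bdev dump shows trsvcid 4421 and cntlid 3. A condensed sketch under the same rpc.py assumption, with a hypothetical key path in place of the test's mktemp file:

# TLS/PSK attach as exercised above (key path is illustrative; the PSK string is the one in the trace).
KEY=/tmp/tls_psk.key                                    # hypothetical; the test uses mktemp
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
chmod 0600 "$KEY"
$RPC keyring_file_add_key key0 "$KEY"
$RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0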
00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:30.599 rmmod nvme_tcp 00:21:30.599 rmmod nvme_fabrics 00:21:30.599 rmmod nvme_keyring 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2451775 ']' 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2451775 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2451775 ']' 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2451775 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.599 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2451775 00:21:30.855 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:30.855 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:30.855 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2451775' 00:21:30.855 killing process with pid 2451775 00:21:30.855 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2451775 00:21:30.855 04:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2451775 00:21:30.855 04:09:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:30.855 04:09:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:30.855 04:09:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:30.855 04:09:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:30.855 04:09:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:30.855 04:09:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:30.855 04:09:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:30.855 04:09:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:30.855 04:09:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:30.855 04:09:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:21:30.855 04:09:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.855 04:09:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:33.392 00:21:33.392 real 0m5.649s 00:21:33.392 user 0m2.108s 00:21:33.392 sys 0m1.952s 00:21:33.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:33.392 ************************************ 00:21:33.392 END TEST nvmf_async_init 00:21:33.392 ************************************ 00:21:33.392 04:09:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:33.392 04:09:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.393 ************************************ 00:21:33.393 START TEST dma 00:21:33.393 ************************************ 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:33.393 * Looking for test storage... 00:21:33.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:33.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.393 --rc genhtml_branch_coverage=1 00:21:33.393 --rc genhtml_function_coverage=1 00:21:33.393 --rc genhtml_legend=1 00:21:33.393 --rc geninfo_all_blocks=1 00:21:33.393 --rc geninfo_unexecuted_blocks=1 00:21:33.393 00:21:33.393 ' 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:33.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.393 --rc genhtml_branch_coverage=1 00:21:33.393 --rc genhtml_function_coverage=1 00:21:33.393 --rc genhtml_legend=1 00:21:33.393 --rc geninfo_all_blocks=1 00:21:33.393 --rc geninfo_unexecuted_blocks=1 00:21:33.393 00:21:33.393 ' 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:33.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.393 --rc genhtml_branch_coverage=1 00:21:33.393 --rc genhtml_function_coverage=1 00:21:33.393 --rc genhtml_legend=1 00:21:33.393 --rc geninfo_all_blocks=1 00:21:33.393 --rc geninfo_unexecuted_blocks=1 00:21:33.393 00:21:33.393 ' 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:33.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.393 --rc genhtml_branch_coverage=1 00:21:33.393 --rc genhtml_function_coverage=1 00:21:33.393 --rc genhtml_legend=1 00:21:33.393 --rc geninfo_all_blocks=1 00:21:33.393 --rc geninfo_unexecuted_blocks=1 00:21:33.393 00:21:33.393 ' 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.393 
04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:33.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:33.393 00:21:33.393 real 0m0.164s 00:21:33.393 user 0m0.113s 00:21:33.393 sys 0m0.061s 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:33.393 ************************************ 00:21:33.393 END TEST dma 00:21:33.393 ************************************ 00:21:33.393 04:09:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.394 ************************************ 00:21:33.394 START TEST nvmf_identify 00:21:33.394 
************************************ 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:33.394 * Looking for test storage... 00:21:33.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:33.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.394 --rc genhtml_branch_coverage=1 00:21:33.394 --rc genhtml_function_coverage=1 00:21:33.394 --rc genhtml_legend=1 00:21:33.394 --rc geninfo_all_blocks=1 00:21:33.394 --rc geninfo_unexecuted_blocks=1 00:21:33.394 00:21:33.394 ' 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:33.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.394 --rc genhtml_branch_coverage=1 00:21:33.394 --rc genhtml_function_coverage=1 00:21:33.394 --rc genhtml_legend=1 00:21:33.394 --rc geninfo_all_blocks=1 00:21:33.394 --rc geninfo_unexecuted_blocks=1 00:21:33.394 00:21:33.394 ' 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:33.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.394 --rc genhtml_branch_coverage=1 00:21:33.394 --rc genhtml_function_coverage=1 00:21:33.394 --rc genhtml_legend=1 00:21:33.394 --rc geninfo_all_blocks=1 00:21:33.394 --rc geninfo_unexecuted_blocks=1 00:21:33.394 00:21:33.394 ' 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:33.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.394 --rc genhtml_branch_coverage=1 00:21:33.394 --rc genhtml_function_coverage=1 00:21:33.394 --rc genhtml_legend=1 00:21:33.394 --rc geninfo_all_blocks=1 00:21:33.394 --rc geninfo_unexecuted_blocks=1 00:21:33.394 00:21:33.394 ' 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:33.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.394 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:33.395 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:33.395 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:33.395 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.395 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.395 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.395 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:33.395 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:33.395 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:33.395 04:09:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:35.930 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:35.930 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.930 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:35.931 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:35.931 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:35.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:21:35.931 00:21:35.931 --- 10.0.0.2 ping statistics --- 00:21:35.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.931 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:35.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:21:35.931 00:21:35.931 --- 10.0.0.1 ping statistics --- 00:21:35.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.931 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2453917 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2453917 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2453917 ']' 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.931 04:09:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:35.931 [2024-12-10 04:09:29.936463] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
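The nvmf_tcp_init sequence above moves one E810 port (cvl_0_0) into a private network namespace for the target, leaves its peer (cvl_0_1) in the default namespace for the initiator, and verifies reachability in both directions before nvmf_tgt is launched inside the namespace (host/identify.sh@18). A condensed sketch of that plumbing, assuming the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing used in this run (run as root):

#!/usr/bin/env bash
# Namespace/IP plumbing as performed by nvmf_tcp_init above (condensed sketch).
set -euo pipefail

NS=cvl_0_0_ns_spdk            # namespace that will host nvmf_tgt
TGT_IF=cvl_0_0                # target-side port, moved into the namespace
INI_IF=cvl_0_1                # initiator-side port, stays in the default namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open TCP/4420 on the initiator-side interface (the test also tags the rule with an iptables comment).
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Reachability checks in both directions, mirroring the pings above.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1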
00:21:35.931 [2024-12-10 04:09:29.936558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.931 [2024-12-10 04:09:30.009087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.931 [2024-12-10 04:09:30.073497] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.931 [2024-12-10 04:09:30.073591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.931 [2024-12-10 04:09:30.073607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.931 [2024-12-10 04:09:30.073618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.931 [2024-12-10 04:09:30.073628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.931 [2024-12-10 04:09:30.075220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.931 [2024-12-10 04:09:30.075280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.931 [2024-12-10 04:09:30.075327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.931 [2024-12-10 04:09:30.075330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:35.931 [2024-12-10 04:09:30.205389] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:35.931 Malloc0 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.931 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:35.931 [2024-12-10 04:09:30.300457] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.932 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.932 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:35.932 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.932 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:36.193 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.193 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:36.193 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.193 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:36.193 [ 00:21:36.193 { 00:21:36.193 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:36.193 "subtype": "Discovery", 00:21:36.193 "listen_addresses": [ 00:21:36.193 { 00:21:36.193 "trtype": "TCP", 00:21:36.193 "adrfam": "IPv4", 00:21:36.193 "traddr": "10.0.0.2", 00:21:36.193 "trsvcid": "4420" 00:21:36.193 } 00:21:36.193 ], 00:21:36.193 "allow_any_host": true, 00:21:36.193 "hosts": [] 00:21:36.193 }, 00:21:36.193 { 00:21:36.193 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.193 "subtype": "NVMe", 00:21:36.193 "listen_addresses": [ 00:21:36.193 { 00:21:36.193 "trtype": "TCP", 00:21:36.193 "adrfam": "IPv4", 00:21:36.193 "traddr": "10.0.0.2", 00:21:36.193 "trsvcid": "4420" 00:21:36.193 } 00:21:36.193 ], 00:21:36.193 "allow_any_host": true, 00:21:36.193 "hosts": [], 00:21:36.193 "serial_number": "SPDK00000000000001", 00:21:36.193 "model_number": "SPDK bdev Controller", 00:21:36.193 "max_namespaces": 32, 00:21:36.193 "min_cntlid": 1, 00:21:36.193 "max_cntlid": 65519, 00:21:36.193 "namespaces": [ 00:21:36.193 { 00:21:36.193 "nsid": 1, 00:21:36.193 "bdev_name": "Malloc0", 00:21:36.193 "name": "Malloc0", 00:21:36.193 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:36.193 "eui64": "ABCDEF0123456789", 00:21:36.193 "uuid": "0c4272b2-f628-4a5e-9962-7d9906af0c51" 00:21:36.193 } 00:21:36.193 ] 00:21:36.193 } 00:21:36.193 ] 00:21:36.193 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.193 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:36.193 [2024-12-10 04:09:30.343133] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:21:36.193 [2024-12-10 04:09:30.343178] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2453943 ] 00:21:36.193 [2024-12-10 04:09:30.394223] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:36.193 [2024-12-10 04:09:30.394297] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:36.193 [2024-12-10 04:09:30.394308] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:36.193 [2024-12-10 04:09:30.394330] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:36.193 [2024-12-10 04:09:30.394346] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:36.193 [2024-12-10 04:09:30.398008] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:36.193 [2024-12-10 04:09:30.398087] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x14e7690 0 00:21:36.193 [2024-12-10 04:09:30.398279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:36.193 [2024-12-10 04:09:30.398299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:36.193 [2024-12-10 04:09:30.398314] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:36.193 [2024-12-10 04:09:30.398322] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:36.193 [2024-12-10 04:09:30.398374] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.193 [2024-12-10 04:09:30.398388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.193 [2024-12-10 04:09:30.398396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e7690) 00:21:36.193 [2024-12-10 04:09:30.398416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:36.193 [2024-12-10 04:09:30.398447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549100, cid 0, qid 0 00:21:36.193 [2024-12-10 04:09:30.405565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.193 [2024-12-10 04:09:30.405591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.193 [2024-12-10 04:09:30.405599] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.193 [2024-12-10 04:09:30.405607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549100) on tqpair=0x14e7690 00:21:36.193 [2024-12-10 04:09:30.405628] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:36.193 [2024-12-10 04:09:30.405642] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:36.193 [2024-12-10 04:09:30.405652] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:36.193 [2024-12-10 04:09:30.405690] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.193 [2024-12-10 04:09:30.405699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.193 [2024-12-10 04:09:30.405706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e7690) 00:21:36.193 [2024-12-10 04:09:30.405717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.193 [2024-12-10 04:09:30.405741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549100, cid 0, qid 0 00:21:36.193 [2024-12-10 04:09:30.409574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.193 [2024-12-10 04:09:30.409592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.193 [2024-12-10 04:09:30.409600] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.193 [2024-12-10 04:09:30.409607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549100) on tqpair=0x14e7690 00:21:36.193 [2024-12-10 04:09:30.409623] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:36.193 [2024-12-10 04:09:30.409638] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:36.193 [2024-12-10 04:09:30.409651] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.193 [2024-12-10 04:09:30.409659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.193 [2024-12-10 04:09:30.409665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e7690) 00:21:36.193 [2024-12-10 04:09:30.409676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.193 [2024-12-10 04:09:30.409700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549100, cid 0, qid 0 00:21:36.193 [2024-12-10 04:09:30.409791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.193 [2024-12-10 04:09:30.409805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.193 [2024-12-10 04:09:30.409812] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.193 [2024-12-10 04:09:30.409819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549100) on tqpair=0x14e7690 00:21:36.193 [2024-12-10 04:09:30.409829] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:36.193 [2024-12-10 04:09:30.409843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:36.193 [2024-12-10 04:09:30.409856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.193 [2024-12-10 04:09:30.409864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.193 [2024-12-10 04:09:30.409870] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e7690) 00:21:36.193 [2024-12-10 04:09:30.409881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.193 [2024-12-10 04:09:30.409907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549100, cid 0, qid 0 
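The FABRIC CONNECT and property get/set traffic above is spdk_nvme_identify (host/identify.sh@39) bringing up an admin queue to the discovery controller that the script configured moments earlier through rpc_cmd (host/identify.sh@24-@37). That target-side configuration can be reproduced by hand with SPDK's rpc.py; a sketch, assuming an nvmf_tgt already running (here started with ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF) with its RPC socket at the default /var/tmp/spdk.sock, and rpc.py taken from an SPDK checkout:

#!/usr/bin/env bash
# Stand-alone equivalent of the rpc_cmd sequence from host/identify.sh@24-@37.
# Assumes a running nvmf_tgt listening on /var/tmp/spdk.sock; rpc.py path is
# relative to an SPDK checkout.
set -euo pipefail

RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192        # same transport options as the test above
$RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems                            # should list discovery plus cnode1, as shown above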
00:21:36.193 [2024-12-10 04:09:30.409990] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.193 [2024-12-10 04:09:30.410004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.193 [2024-12-10 04:09:30.410011] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.410018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549100) on tqpair=0x14e7690 00:21:36.194 [2024-12-10 04:09:30.410027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:36.194 [2024-12-10 04:09:30.410044] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.410053] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.410059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e7690) 00:21:36.194 [2024-12-10 04:09:30.410070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.194 [2024-12-10 04:09:30.410091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549100, cid 0, qid 0 00:21:36.194 [2024-12-10 04:09:30.410172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.194 [2024-12-10 04:09:30.410184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.194 [2024-12-10 04:09:30.410191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.410198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549100) on tqpair=0x14e7690 00:21:36.194 [2024-12-10 04:09:30.410207] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:36.194 [2024-12-10 04:09:30.410216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:36.194 [2024-12-10 04:09:30.410228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:36.194 [2024-12-10 04:09:30.410338] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:36.194 [2024-12-10 04:09:30.410346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:36.194 [2024-12-10 04:09:30.410362] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.410370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.410377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e7690) 00:21:36.194 [2024-12-10 04:09:30.410387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.194 [2024-12-10 04:09:30.410408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549100, cid 0, qid 0 00:21:36.194 [2024-12-10 04:09:30.410513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.194 [2024-12-10 04:09:30.410525] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.194 [2024-12-10 04:09:30.410532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.410539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549100) on tqpair=0x14e7690 00:21:36.194 [2024-12-10 04:09:30.410558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:36.194 [2024-12-10 04:09:30.410576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.410585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.410596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e7690) 00:21:36.194 [2024-12-10 04:09:30.410607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.194 [2024-12-10 04:09:30.410629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549100, cid 0, qid 0 00:21:36.194 [2024-12-10 04:09:30.410708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.194 [2024-12-10 04:09:30.410722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.194 [2024-12-10 04:09:30.410729] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.410736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549100) on tqpair=0x14e7690 00:21:36.194 [2024-12-10 04:09:30.410744] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:36.194 [2024-12-10 04:09:30.410752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:36.194 [2024-12-10 04:09:30.410766] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:36.194 [2024-12-10 04:09:30.410784] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:36.194 [2024-12-10 04:09:30.410802] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.410810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e7690) 00:21:36.194 [2024-12-10 04:09:30.410821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.194 [2024-12-10 04:09:30.410843] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549100, cid 0, qid 0 00:21:36.194 [2024-12-10 04:09:30.410964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.194 [2024-12-10 04:09:30.410978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.194 [2024-12-10 04:09:30.410986] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.410993] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e7690): datao=0, datal=4096, cccid=0 00:21:36.194 [2024-12-10 04:09:30.411001] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1549100) on tqpair(0x14e7690): expected_datao=0, payload_size=4096 00:21:36.194 [2024-12-10 04:09:30.411009] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.411026] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.411036] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.411048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.194 [2024-12-10 04:09:30.411058] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.194 [2024-12-10 04:09:30.411065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.411072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549100) on tqpair=0x14e7690 00:21:36.194 [2024-12-10 04:09:30.411090] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:36.194 [2024-12-10 04:09:30.411100] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:36.194 [2024-12-10 04:09:30.411108] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:36.194 [2024-12-10 04:09:30.411117] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:36.194 [2024-12-10 04:09:30.411125] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:36.194 [2024-12-10 04:09:30.411136] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:36.194 [2024-12-10 04:09:30.411152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:36.194 [2024-12-10 04:09:30.411165] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.411173] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.411179] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e7690) 00:21:36.194 [2024-12-10 04:09:30.411190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:36.194 [2024-12-10 04:09:30.411212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549100, cid 0, qid 0 00:21:36.194 [2024-12-10 04:09:30.411306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.194 [2024-12-10 04:09:30.411320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.194 [2024-12-10 04:09:30.411327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.411336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549100) on tqpair=0x14e7690 00:21:36.194 [2024-12-10 04:09:30.411351] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.411359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.411365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e7690) 00:21:36.194 
[2024-12-10 04:09:30.411375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.194 [2024-12-10 04:09:30.411385] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.411392] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.411398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x14e7690) 00:21:36.194 [2024-12-10 04:09:30.411407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.194 [2024-12-10 04:09:30.411417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.411424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.411430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x14e7690) 00:21:36.194 [2024-12-10 04:09:30.411439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.194 [2024-12-10 04:09:30.411448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.411455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.411461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e7690) 00:21:36.194 [2024-12-10 04:09:30.411470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.194 [2024-12-10 04:09:30.411483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:36.194 [2024-12-10 04:09:30.411503] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:36.194 [2024-12-10 04:09:30.411516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.194 [2024-12-10 04:09:30.411524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e7690) 00:21:36.194 [2024-12-10 04:09:30.411534] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.194 [2024-12-10 04:09:30.411566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549100, cid 0, qid 0 00:21:36.194 [2024-12-10 04:09:30.411583] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549280, cid 1, qid 0 00:21:36.195 [2024-12-10 04:09:30.411591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549400, cid 2, qid 0 00:21:36.195 [2024-12-10 04:09:30.411599] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549580, cid 3, qid 0 00:21:36.195 [2024-12-10 04:09:30.411607] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549700, cid 4, qid 0 00:21:36.195 [2024-12-10 04:09:30.411716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.195 [2024-12-10 04:09:30.411728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.195 [2024-12-10 04:09:30.411734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:21:36.195 [2024-12-10 04:09:30.411741] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549700) on tqpair=0x14e7690 00:21:36.195 [2024-12-10 04:09:30.411751] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:36.195 [2024-12-10 04:09:30.411760] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:21:36.195 [2024-12-10 04:09:30.411777] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.411787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e7690) 00:21:36.195 [2024-12-10 04:09:30.411797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.195 [2024-12-10 04:09:30.411818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549700, cid 4, qid 0 00:21:36.195 [2024-12-10 04:09:30.411913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.195 [2024-12-10 04:09:30.411928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.195 [2024-12-10 04:09:30.411935] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.411941] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e7690): datao=0, datal=4096, cccid=4 00:21:36.195 [2024-12-10 04:09:30.411949] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1549700) on tqpair(0x14e7690): expected_datao=0, payload_size=4096 00:21:36.195 [2024-12-10 04:09:30.411956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.411972] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.411981] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.455556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.195 [2024-12-10 04:09:30.455576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.195 [2024-12-10 04:09:30.455584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.455591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549700) on tqpair=0x14e7690 00:21:36.195 [2024-12-10 04:09:30.455614] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:36.195 [2024-12-10 04:09:30.455657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.455668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e7690) 00:21:36.195 [2024-12-10 04:09:30.455679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.195 [2024-12-10 04:09:30.455692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.455699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.455706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14e7690) 00:21:36.195 [2024-12-10 04:09:30.455715] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.195 [2024-12-10 04:09:30.455747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549700, cid 4, qid 0 00:21:36.195 [2024-12-10 04:09:30.455760] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549880, cid 5, qid 0 00:21:36.195 [2024-12-10 04:09:30.455907] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.195 [2024-12-10 04:09:30.455920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.195 [2024-12-10 04:09:30.455927] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.455933] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e7690): datao=0, datal=1024, cccid=4 00:21:36.195 [2024-12-10 04:09:30.455941] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1549700) on tqpair(0x14e7690): expected_datao=0, payload_size=1024 00:21:36.195 [2024-12-10 04:09:30.455949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.455959] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.455966] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.455975] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.195 [2024-12-10 04:09:30.455984] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.195 [2024-12-10 04:09:30.455991] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.455998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549880) on tqpair=0x14e7690 00:21:36.195 [2024-12-10 04:09:30.496667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.195 [2024-12-10 04:09:30.496686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.195 [2024-12-10 04:09:30.496694] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.496701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549700) on tqpair=0x14e7690 00:21:36.195 [2024-12-10 04:09:30.496719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.496729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e7690) 00:21:36.195 [2024-12-10 04:09:30.496740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.195 [2024-12-10 04:09:30.496769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549700, cid 4, qid 0 00:21:36.195 [2024-12-10 04:09:30.496892] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.195 [2024-12-10 04:09:30.496907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.195 [2024-12-10 04:09:30.496915] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.496921] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e7690): datao=0, datal=3072, cccid=4 00:21:36.195 [2024-12-10 04:09:30.496929] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1549700) on tqpair(0x14e7690): expected_datao=0, payload_size=3072 00:21:36.195 [2024-12-10 04:09:30.496936] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.496947] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.496954] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.496970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.195 [2024-12-10 04:09:30.496980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.195 [2024-12-10 04:09:30.496987] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.496994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549700) on tqpair=0x14e7690 00:21:36.195 [2024-12-10 04:09:30.497009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.497018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e7690) 00:21:36.195 [2024-12-10 04:09:30.497028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.195 [2024-12-10 04:09:30.497061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549700, cid 4, qid 0 00:21:36.195 [2024-12-10 04:09:30.497155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.195 [2024-12-10 04:09:30.497168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.195 [2024-12-10 04:09:30.497175] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.497181] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e7690): datao=0, datal=8, cccid=4 00:21:36.195 [2024-12-10 04:09:30.497189] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1549700) on tqpair(0x14e7690): expected_datao=0, payload_size=8 00:21:36.195 [2024-12-10 04:09:30.497197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.497206] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.497213] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.537628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.195 [2024-12-10 04:09:30.537648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.195 [2024-12-10 04:09:30.537655] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.195 [2024-12-10 04:09:30.537662] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549700) on tqpair=0x14e7690 00:21:36.195 ===================================================== 00:21:36.195 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:36.195 ===================================================== 00:21:36.195 Controller Capabilities/Features 00:21:36.195 ================================ 00:21:36.195 Vendor ID: 0000 00:21:36.195 Subsystem Vendor ID: 0000 00:21:36.195 Serial Number: .................... 00:21:36.195 Model Number: ........................................ 
00:21:36.195 Firmware Version: 25.01 00:21:36.195 Recommended Arb Burst: 0 00:21:36.195 IEEE OUI Identifier: 00 00 00 00:21:36.195 Multi-path I/O 00:21:36.195 May have multiple subsystem ports: No 00:21:36.195 May have multiple controllers: No 00:21:36.195 Associated with SR-IOV VF: No 00:21:36.195 Max Data Transfer Size: 131072 00:21:36.195 Max Number of Namespaces: 0 00:21:36.195 Max Number of I/O Queues: 1024 00:21:36.195 NVMe Specification Version (VS): 1.3 00:21:36.195 NVMe Specification Version (Identify): 1.3 00:21:36.195 Maximum Queue Entries: 128 00:21:36.195 Contiguous Queues Required: Yes 00:21:36.195 Arbitration Mechanisms Supported 00:21:36.195 Weighted Round Robin: Not Supported 00:21:36.195 Vendor Specific: Not Supported 00:21:36.195 Reset Timeout: 15000 ms 00:21:36.195 Doorbell Stride: 4 bytes 00:21:36.195 NVM Subsystem Reset: Not Supported 00:21:36.195 Command Sets Supported 00:21:36.195 NVM Command Set: Supported 00:21:36.195 Boot Partition: Not Supported 00:21:36.195 Memory Page Size Minimum: 4096 bytes 00:21:36.195 Memory Page Size Maximum: 4096 bytes 00:21:36.195 Persistent Memory Region: Not Supported 00:21:36.195 Optional Asynchronous Events Supported 00:21:36.195 Namespace Attribute Notices: Not Supported 00:21:36.195 Firmware Activation Notices: Not Supported 00:21:36.195 ANA Change Notices: Not Supported 00:21:36.195 PLE Aggregate Log Change Notices: Not Supported 00:21:36.195 LBA Status Info Alert Notices: Not Supported 00:21:36.195 EGE Aggregate Log Change Notices: Not Supported 00:21:36.195 Normal NVM Subsystem Shutdown event: Not Supported 00:21:36.196 Zone Descriptor Change Notices: Not Supported 00:21:36.196 Discovery Log Change Notices: Supported 00:21:36.196 Controller Attributes 00:21:36.196 128-bit Host Identifier: Not Supported 00:21:36.196 Non-Operational Permissive Mode: Not Supported 00:21:36.196 NVM Sets: Not Supported 00:21:36.196 Read Recovery Levels: Not Supported 00:21:36.196 Endurance Groups: Not Supported 00:21:36.196 Predictable Latency Mode: Not Supported 00:21:36.196 Traffic Based Keep ALive: Not Supported 00:21:36.196 Namespace Granularity: Not Supported 00:21:36.196 SQ Associations: Not Supported 00:21:36.196 UUID List: Not Supported 00:21:36.196 Multi-Domain Subsystem: Not Supported 00:21:36.196 Fixed Capacity Management: Not Supported 00:21:36.196 Variable Capacity Management: Not Supported 00:21:36.196 Delete Endurance Group: Not Supported 00:21:36.196 Delete NVM Set: Not Supported 00:21:36.196 Extended LBA Formats Supported: Not Supported 00:21:36.196 Flexible Data Placement Supported: Not Supported 00:21:36.196 00:21:36.196 Controller Memory Buffer Support 00:21:36.196 ================================ 00:21:36.196 Supported: No 00:21:36.196 00:21:36.196 Persistent Memory Region Support 00:21:36.196 ================================ 00:21:36.196 Supported: No 00:21:36.196 00:21:36.196 Admin Command Set Attributes 00:21:36.196 ============================ 00:21:36.196 Security Send/Receive: Not Supported 00:21:36.196 Format NVM: Not Supported 00:21:36.196 Firmware Activate/Download: Not Supported 00:21:36.196 Namespace Management: Not Supported 00:21:36.196 Device Self-Test: Not Supported 00:21:36.196 Directives: Not Supported 00:21:36.196 NVMe-MI: Not Supported 00:21:36.196 Virtualization Management: Not Supported 00:21:36.196 Doorbell Buffer Config: Not Supported 00:21:36.196 Get LBA Status Capability: Not Supported 00:21:36.196 Command & Feature Lockdown Capability: Not Supported 00:21:36.196 Abort Command Limit: 1 00:21:36.196 Async 
Event Request Limit: 4 00:21:36.196 Number of Firmware Slots: N/A 00:21:36.196 Firmware Slot 1 Read-Only: N/A 00:21:36.196 Firmware Activation Without Reset: N/A 00:21:36.196 Multiple Update Detection Support: N/A 00:21:36.196 Firmware Update Granularity: No Information Provided 00:21:36.196 Per-Namespace SMART Log: No 00:21:36.196 Asymmetric Namespace Access Log Page: Not Supported 00:21:36.196 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:36.196 Command Effects Log Page: Not Supported 00:21:36.196 Get Log Page Extended Data: Supported 00:21:36.196 Telemetry Log Pages: Not Supported 00:21:36.196 Persistent Event Log Pages: Not Supported 00:21:36.196 Supported Log Pages Log Page: May Support 00:21:36.196 Commands Supported & Effects Log Page: Not Supported 00:21:36.196 Feature Identifiers & Effects Log Page:May Support 00:21:36.196 NVMe-MI Commands & Effects Log Page: May Support 00:21:36.196 Data Area 4 for Telemetry Log: Not Supported 00:21:36.196 Error Log Page Entries Supported: 128 00:21:36.196 Keep Alive: Not Supported 00:21:36.196 00:21:36.196 NVM Command Set Attributes 00:21:36.196 ========================== 00:21:36.196 Submission Queue Entry Size 00:21:36.196 Max: 1 00:21:36.196 Min: 1 00:21:36.196 Completion Queue Entry Size 00:21:36.196 Max: 1 00:21:36.196 Min: 1 00:21:36.196 Number of Namespaces: 0 00:21:36.196 Compare Command: Not Supported 00:21:36.196 Write Uncorrectable Command: Not Supported 00:21:36.196 Dataset Management Command: Not Supported 00:21:36.196 Write Zeroes Command: Not Supported 00:21:36.196 Set Features Save Field: Not Supported 00:21:36.196 Reservations: Not Supported 00:21:36.196 Timestamp: Not Supported 00:21:36.196 Copy: Not Supported 00:21:36.196 Volatile Write Cache: Not Present 00:21:36.196 Atomic Write Unit (Normal): 1 00:21:36.196 Atomic Write Unit (PFail): 1 00:21:36.196 Atomic Compare & Write Unit: 1 00:21:36.196 Fused Compare & Write: Supported 00:21:36.196 Scatter-Gather List 00:21:36.196 SGL Command Set: Supported 00:21:36.196 SGL Keyed: Supported 00:21:36.196 SGL Bit Bucket Descriptor: Not Supported 00:21:36.196 SGL Metadata Pointer: Not Supported 00:21:36.196 Oversized SGL: Not Supported 00:21:36.196 SGL Metadata Address: Not Supported 00:21:36.196 SGL Offset: Supported 00:21:36.196 Transport SGL Data Block: Not Supported 00:21:36.196 Replay Protected Memory Block: Not Supported 00:21:36.196 00:21:36.196 Firmware Slot Information 00:21:36.196 ========================= 00:21:36.196 Active slot: 0 00:21:36.196 00:21:36.196 00:21:36.196 Error Log 00:21:36.196 ========= 00:21:36.196 00:21:36.196 Active Namespaces 00:21:36.196 ================= 00:21:36.196 Discovery Log Page 00:21:36.196 ================== 00:21:36.196 Generation Counter: 2 00:21:36.196 Number of Records: 2 00:21:36.196 Record Format: 0 00:21:36.196 00:21:36.196 Discovery Log Entry 0 00:21:36.196 ---------------------- 00:21:36.196 Transport Type: 3 (TCP) 00:21:36.196 Address Family: 1 (IPv4) 00:21:36.196 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:36.196 Entry Flags: 00:21:36.196 Duplicate Returned Information: 1 00:21:36.196 Explicit Persistent Connection Support for Discovery: 1 00:21:36.196 Transport Requirements: 00:21:36.196 Secure Channel: Not Required 00:21:36.196 Port ID: 0 (0x0000) 00:21:36.196 Controller ID: 65535 (0xffff) 00:21:36.196 Admin Max SQ Size: 128 00:21:36.196 Transport Service Identifier: 4420 00:21:36.196 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:36.196 Transport Address: 10.0.0.2 00:21:36.196 
Discovery Log Entry 1 00:21:36.196 ---------------------- 00:21:36.196 Transport Type: 3 (TCP) 00:21:36.196 Address Family: 1 (IPv4) 00:21:36.196 Subsystem Type: 2 (NVM Subsystem) 00:21:36.196 Entry Flags: 00:21:36.196 Duplicate Returned Information: 0 00:21:36.196 Explicit Persistent Connection Support for Discovery: 0 00:21:36.196 Transport Requirements: 00:21:36.196 Secure Channel: Not Required 00:21:36.196 Port ID: 0 (0x0000) 00:21:36.196 Controller ID: 65535 (0xffff) 00:21:36.196 Admin Max SQ Size: 128 00:21:36.196 Transport Service Identifier: 4420 00:21:36.196 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:36.196 Transport Address: 10.0.0.2 [2024-12-10 04:09:30.537779] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:21:36.196 [2024-12-10 04:09:30.537802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549100) on tqpair=0x14e7690 00:21:36.196 [2024-12-10 04:09:30.537817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.196 [2024-12-10 04:09:30.537827] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549280) on tqpair=0x14e7690 00:21:36.196 [2024-12-10 04:09:30.537835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.196 [2024-12-10 04:09:30.537843] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549400) on tqpair=0x14e7690 00:21:36.196 [2024-12-10 04:09:30.537851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.196 [2024-12-10 04:09:30.537859] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549580) on tqpair=0x14e7690 00:21:36.196 [2024-12-10 04:09:30.537866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.196 [2024-12-10 04:09:30.537884] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.196 [2024-12-10 04:09:30.537894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.196 [2024-12-10 04:09:30.537901] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e7690) 00:21:36.196 [2024-12-10 04:09:30.537912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.196 [2024-12-10 04:09:30.537937] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549580, cid 3, qid 0 00:21:36.196 [2024-12-10 04:09:30.538042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.196 [2024-12-10 04:09:30.538054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.196 [2024-12-10 04:09:30.538061] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.196 [2024-12-10 04:09:30.538068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549580) on tqpair=0x14e7690 00:21:36.196 [2024-12-10 04:09:30.538081] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.196 [2024-12-10 04:09:30.538088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.196 [2024-12-10 04:09:30.538095] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e7690) 00:21:36.196 [2024-12-10 
04:09:30.538105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.196 [2024-12-10 04:09:30.538136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549580, cid 3, qid 0 00:21:36.196 [2024-12-10 04:09:30.538244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.196 [2024-12-10 04:09:30.538258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.196 [2024-12-10 04:09:30.538265] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.196 [2024-12-10 04:09:30.538272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549580) on tqpair=0x14e7690 00:21:36.196 [2024-12-10 04:09:30.538281] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:36.196 [2024-12-10 04:09:30.538289] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:36.197 [2024-12-10 04:09:30.538305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.538314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.538321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e7690) 00:21:36.197 [2024-12-10 04:09:30.538331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.197 [2024-12-10 04:09:30.538352] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549580, cid 3, qid 0 00:21:36.197 [2024-12-10 04:09:30.538442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.197 [2024-12-10 04:09:30.538454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.197 [2024-12-10 04:09:30.538461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.538468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549580) on tqpair=0x14e7690 00:21:36.197 [2024-12-10 04:09:30.538485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.538494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.538500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e7690) 00:21:36.197 [2024-12-10 04:09:30.538511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.197 [2024-12-10 04:09:30.538532] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549580, cid 3, qid 0 00:21:36.197 [2024-12-10 04:09:30.538631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.197 [2024-12-10 04:09:30.538645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.197 [2024-12-10 04:09:30.538652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.538659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549580) on tqpair=0x14e7690 00:21:36.197 [2024-12-10 04:09:30.538675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.538684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.538691] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e7690) 00:21:36.197 [2024-12-10 04:09:30.538701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.197 [2024-12-10 04:09:30.538722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549580, cid 3, qid 0 00:21:36.197 [2024-12-10 04:09:30.538795] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.197 [2024-12-10 04:09:30.538807] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.197 [2024-12-10 04:09:30.538821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.538828] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549580) on tqpair=0x14e7690 00:21:36.197 [2024-12-10 04:09:30.538843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.538852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.538864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e7690) 00:21:36.197 [2024-12-10 04:09:30.538875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.197 [2024-12-10 04:09:30.538897] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549580, cid 3, qid 0 00:21:36.197 [2024-12-10 04:09:30.538972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.197 [2024-12-10 04:09:30.538985] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.197 [2024-12-10 04:09:30.538992] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.538999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549580) on tqpair=0x14e7690 00:21:36.197 [2024-12-10 04:09:30.539015] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.539024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.539031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e7690) 00:21:36.197 [2024-12-10 04:09:30.539041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.197 [2024-12-10 04:09:30.539062] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549580, cid 3, qid 0 00:21:36.197 [2024-12-10 04:09:30.539157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.197 [2024-12-10 04:09:30.539170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.197 [2024-12-10 04:09:30.539177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.539184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549580) on tqpair=0x14e7690 00:21:36.197 [2024-12-10 04:09:30.539200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.539209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.539215] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e7690) 00:21:36.197 [2024-12-10 04:09:30.539226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.197 [2024-12-10 04:09:30.539246] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549580, cid 3, qid 0 00:21:36.197 [2024-12-10 04:09:30.539320] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.197 [2024-12-10 04:09:30.539332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.197 [2024-12-10 04:09:30.539338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.539345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549580) on tqpair=0x14e7690 00:21:36.197 [2024-12-10 04:09:30.539361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.539370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.539376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e7690) 00:21:36.197 [2024-12-10 04:09:30.539387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.197 [2024-12-10 04:09:30.539407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549580, cid 3, qid 0 00:21:36.197 [2024-12-10 04:09:30.539489] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.197 [2024-12-10 04:09:30.539501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.197 [2024-12-10 04:09:30.539507] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.539514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549580) on tqpair=0x14e7690 00:21:36.197 [2024-12-10 04:09:30.539529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.539539] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.543573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e7690) 00:21:36.197 [2024-12-10 04:09:30.543605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.197 [2024-12-10 04:09:30.543629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1549580, cid 3, qid 0 00:21:36.197 [2024-12-10 04:09:30.543719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.197 [2024-12-10 04:09:30.543733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.197 [2024-12-10 04:09:30.543740] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.197 [2024-12-10 04:09:30.543747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1549580) on tqpair=0x14e7690 00:21:36.197 [2024-12-10 04:09:30.543761] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:21:36.197 00:21:36.197 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:36.459 [2024-12-10 04:09:30.582634] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
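The spdk_nvme_identify invocation above (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all) is what produces the EAL parameter line and the *DEBUG* trace that follows: with -L all enabled, the example logs each step of the NVMe-oF controller bring-up (fabric CONNECT on the admin queue, VS/CAP/CC/CSTS property reads and writes, IDENTIFY, AER configuration, keep-alive and queue-count setup) before printing the controller report further below. As a minimal illustrative sketch only, and not the identify example's actual source, the same attach sequence can be reproduced with SPDK's public host API roughly as shown here; the program name, the error handling and the printed fields are assumptions made for the example.

/*
 * Minimal sketch (assumed code, not from this repository): attach to the
 * same NVMe-oF/TCP subsystem that the trace below brings up, then read back
 * the controller data that the identify report summarizes.
 */
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";      /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0) {
                return 1;
        }

        /* Same transport string the logged command passes via -r. */
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                fprintf(stderr, "failed to parse transport ID\n");
                return 1;
        }

        /*
         * spdk_nvme_connect() drives the sequence recorded in the trace:
         * fabric CONNECT, controller property accesses (VS, CAP, CC, CSTS),
         * IDENTIFY, AER setup, keep-alive and queue-count negotiation.
         */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
                fprintf(stderr, "failed to connect to %s\n", trid.traddr);
                return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model Number: %.40s\n", (const char *)cdata->mn);
        printf("Serial Number: %.20s\n", (const char *)cdata->sn);

        spdk_nvme_detach(ctrlr);
        return 0;
}

A program like this would be built against the SPDK headers and libraries unpacked earlier in this job; the controller report printed after the trace (Vendor ID 8086, Model Number SPDK bdev Controller, Firmware Version 25.01) is largely the same identify data that spdk_nvme_ctrlr_get_data() returns.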
00:21:36.459 [2024-12-10 04:09:30.582686] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2454009 ] 00:21:36.459 [2024-12-10 04:09:30.644942] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:21:36.459 [2024-12-10 04:09:30.645007] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:36.459 [2024-12-10 04:09:30.645018] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:36.459 [2024-12-10 04:09:30.645040] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:36.459 [2024-12-10 04:09:30.645055] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:36.459 [2024-12-10 04:09:30.649025] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:21:36.459 [2024-12-10 04:09:30.649069] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1fd0690 0 00:21:36.459 [2024-12-10 04:09:30.655555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:36.459 [2024-12-10 04:09:30.655576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:36.459 [2024-12-10 04:09:30.655590] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:36.459 [2024-12-10 04:09:30.655604] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:36.459 [2024-12-10 04:09:30.655641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.459 [2024-12-10 04:09:30.655653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.459 [2024-12-10 04:09:30.655661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd0690) 00:21:36.459 [2024-12-10 04:09:30.655682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:36.459 [2024-12-10 04:09:30.655710] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032100, cid 0, qid 0 00:21:36.459 [2024-12-10 04:09:30.663561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.459 [2024-12-10 04:09:30.663579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.459 [2024-12-10 04:09:30.663586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.459 [2024-12-10 04:09:30.663594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032100) on tqpair=0x1fd0690 00:21:36.459 [2024-12-10 04:09:30.663609] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:36.459 [2024-12-10 04:09:30.663627] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:21:36.459 [2024-12-10 04:09:30.663638] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:21:36.459 [2024-12-10 04:09:30.663659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.459 [2024-12-10 04:09:30.663668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.459 [2024-12-10 04:09:30.663675] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd0690) 00:21:36.459 [2024-12-10 04:09:30.663686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.459 [2024-12-10 04:09:30.663711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032100, cid 0, qid 0 00:21:36.459 [2024-12-10 04:09:30.663841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.459 [2024-12-10 04:09:30.663855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.459 [2024-12-10 04:09:30.663861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.459 [2024-12-10 04:09:30.663868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032100) on tqpair=0x1fd0690 00:21:36.459 [2024-12-10 04:09:30.663881] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:21:36.459 [2024-12-10 04:09:30.663895] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:21:36.459 [2024-12-10 04:09:30.663908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.459 [2024-12-10 04:09:30.663916] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.459 [2024-12-10 04:09:30.663922] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd0690) 00:21:36.459 [2024-12-10 04:09:30.663933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.459 [2024-12-10 04:09:30.663955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032100, cid 0, qid 0 00:21:36.459 [2024-12-10 04:09:30.664036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.459 [2024-12-10 04:09:30.664050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.459 [2024-12-10 04:09:30.664057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.459 [2024-12-10 04:09:30.664063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032100) on tqpair=0x1fd0690 00:21:36.459 [2024-12-10 04:09:30.664074] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:21:36.459 [2024-12-10 04:09:30.664089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:36.459 [2024-12-10 04:09:30.664101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.664109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.664115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd0690) 00:21:36.460 [2024-12-10 04:09:30.664125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.460 [2024-12-10 04:09:30.664147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032100, cid 0, qid 0 00:21:36.460 [2024-12-10 04:09:30.664245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.460 [2024-12-10 04:09:30.664257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.460 [2024-12-10 
04:09:30.664264] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.664271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032100) on tqpair=0x1fd0690 00:21:36.460 [2024-12-10 04:09:30.664280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:36.460 [2024-12-10 04:09:30.664302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.664312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.664318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd0690) 00:21:36.460 [2024-12-10 04:09:30.664328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.460 [2024-12-10 04:09:30.664350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032100, cid 0, qid 0 00:21:36.460 [2024-12-10 04:09:30.664443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.460 [2024-12-10 04:09:30.664456] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.460 [2024-12-10 04:09:30.664463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.664470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032100) on tqpair=0x1fd0690 00:21:36.460 [2024-12-10 04:09:30.664479] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:36.460 [2024-12-10 04:09:30.664488] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:36.460 [2024-12-10 04:09:30.664501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:36.460 [2024-12-10 04:09:30.664611] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:36.460 [2024-12-10 04:09:30.664622] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:36.460 [2024-12-10 04:09:30.664636] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.664644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.664650] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd0690) 00:21:36.460 [2024-12-10 04:09:30.664660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.460 [2024-12-10 04:09:30.664682] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032100, cid 0, qid 0 00:21:36.460 [2024-12-10 04:09:30.664808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.460 [2024-12-10 04:09:30.664819] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.460 [2024-12-10 04:09:30.664826] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.664833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032100) on tqpair=0x1fd0690 00:21:36.460 
[2024-12-10 04:09:30.664841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:36.460 [2024-12-10 04:09:30.664856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.664865] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.664871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd0690) 00:21:36.460 [2024-12-10 04:09:30.664881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.460 [2024-12-10 04:09:30.664902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032100, cid 0, qid 0 00:21:36.460 [2024-12-10 04:09:30.664983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.460 [2024-12-10 04:09:30.664997] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.460 [2024-12-10 04:09:30.665004] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.665010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032100) on tqpair=0x1fd0690 00:21:36.460 [2024-12-10 04:09:30.665018] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:36.460 [2024-12-10 04:09:30.665031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:36.460 [2024-12-10 04:09:30.665045] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:36.460 [2024-12-10 04:09:30.665061] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:36.460 [2024-12-10 04:09:30.665076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.665084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd0690) 00:21:36.460 [2024-12-10 04:09:30.665096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.460 [2024-12-10 04:09:30.665117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032100, cid 0, qid 0 00:21:36.460 [2024-12-10 04:09:30.665252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.460 [2024-12-10 04:09:30.665267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.460 [2024-12-10 04:09:30.665274] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.665280] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd0690): datao=0, datal=4096, cccid=0 00:21:36.460 [2024-12-10 04:09:30.665288] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2032100) on tqpair(0x1fd0690): expected_datao=0, payload_size=4096 00:21:36.460 [2024-12-10 04:09:30.665296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.665307] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.665315] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.665334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.460 [2024-12-10 04:09:30.665345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.460 [2024-12-10 04:09:30.665352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.665359] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032100) on tqpair=0x1fd0690 00:21:36.460 [2024-12-10 04:09:30.665377] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:36.460 [2024-12-10 04:09:30.665387] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:36.460 [2024-12-10 04:09:30.665394] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:21:36.460 [2024-12-10 04:09:30.665401] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:36.460 [2024-12-10 04:09:30.665409] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:36.460 [2024-12-10 04:09:30.665417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:36.460 [2024-12-10 04:09:30.665432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:36.460 [2024-12-10 04:09:30.665444] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.665452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.665458] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd0690) 00:21:36.460 [2024-12-10 04:09:30.665469] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:36.460 [2024-12-10 04:09:30.665491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032100, cid 0, qid 0 00:21:36.460 [2024-12-10 04:09:30.665625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.460 [2024-12-10 04:09:30.665640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.460 [2024-12-10 04:09:30.665648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.665654] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032100) on tqpair=0x1fd0690 00:21:36.460 [2024-12-10 04:09:30.665667] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.665674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.665681] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd0690) 00:21:36.460 [2024-12-10 04:09:30.665690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.460 [2024-12-10 04:09:30.665701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.665708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.460 [2024-12-10 
04:09:30.665714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1fd0690) 00:21:36.460 [2024-12-10 04:09:30.665723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.460 [2024-12-10 04:09:30.665732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.665739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.665745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1fd0690) 00:21:36.460 [2024-12-10 04:09:30.665754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.460 [2024-12-10 04:09:30.665763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.665770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.665776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.460 [2024-12-10 04:09:30.665785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.460 [2024-12-10 04:09:30.665794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:36.460 [2024-12-10 04:09:30.665813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:36.460 [2024-12-10 04:09:30.665827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.460 [2024-12-10 04:09:30.665834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd0690) 00:21:36.460 [2024-12-10 04:09:30.665844] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.460 [2024-12-10 04:09:30.665867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032100, cid 0, qid 0 00:21:36.460 [2024-12-10 04:09:30.665878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032280, cid 1, qid 0 00:21:36.461 [2024-12-10 04:09:30.665886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032400, cid 2, qid 0 00:21:36.461 [2024-12-10 04:09:30.665894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.461 [2024-12-10 04:09:30.665901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032700, cid 4, qid 0 00:21:36.461 [2024-12-10 04:09:30.666023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.461 [2024-12-10 04:09:30.666036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.461 [2024-12-10 04:09:30.666042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.666049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032700) on tqpair=0x1fd0690 00:21:36.461 [2024-12-10 04:09:30.666060] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:21:36.461 [2024-12-10 04:09:30.666073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:36.461 [2024-12-10 04:09:30.666088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:21:36.461 [2024-12-10 04:09:30.666100] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:36.461 [2024-12-10 04:09:30.666110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.666118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.666124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd0690) 00:21:36.461 [2024-12-10 04:09:30.666134] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:36.461 [2024-12-10 04:09:30.666156] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032700, cid 4, qid 0 00:21:36.461 [2024-12-10 04:09:30.666280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.461 [2024-12-10 04:09:30.666292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.461 [2024-12-10 04:09:30.666299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.666305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032700) on tqpair=0x1fd0690 00:21:36.461 [2024-12-10 04:09:30.666375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:36.461 [2024-12-10 04:09:30.666396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:36.461 [2024-12-10 04:09:30.666412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.666420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd0690) 00:21:36.461 [2024-12-10 04:09:30.666430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.461 [2024-12-10 04:09:30.666452] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032700, cid 4, qid 0 00:21:36.461 [2024-12-10 04:09:30.666559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.461 [2024-12-10 04:09:30.666574] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.461 [2024-12-10 04:09:30.666581] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.666587] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd0690): datao=0, datal=4096, cccid=4 00:21:36.461 [2024-12-10 04:09:30.666594] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2032700) on tqpair(0x1fd0690): expected_datao=0, payload_size=4096 00:21:36.461 [2024-12-10 04:09:30.666601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.666619] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.666628] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.461 [2024-12-10 
04:09:30.666639] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.461 [2024-12-10 04:09:30.666648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.461 [2024-12-10 04:09:30.666655] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.666662] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032700) on tqpair=0x1fd0690 00:21:36.461 [2024-12-10 04:09:30.666680] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:36.461 [2024-12-10 04:09:30.666704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:36.461 [2024-12-10 04:09:30.666726] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:36.461 [2024-12-10 04:09:30.666741] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.666749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd0690) 00:21:36.461 [2024-12-10 04:09:30.666759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.461 [2024-12-10 04:09:30.666781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032700, cid 4, qid 0 00:21:36.461 [2024-12-10 04:09:30.666909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.461 [2024-12-10 04:09:30.666922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.461 [2024-12-10 04:09:30.666929] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.666936] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd0690): datao=0, datal=4096, cccid=4 00:21:36.461 [2024-12-10 04:09:30.666943] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2032700) on tqpair(0x1fd0690): expected_datao=0, payload_size=4096 00:21:36.461 [2024-12-10 04:09:30.666950] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.666967] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.666976] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.667008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.461 [2024-12-10 04:09:30.667020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.461 [2024-12-10 04:09:30.667027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.667034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032700) on tqpair=0x1fd0690 00:21:36.461 [2024-12-10 04:09:30.667058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:36.461 [2024-12-10 04:09:30.667078] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:36.461 [2024-12-10 04:09:30.667092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.667100] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1fd0690) 00:21:36.461 [2024-12-10 04:09:30.667110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.461 [2024-12-10 04:09:30.667132] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032700, cid 4, qid 0 00:21:36.461 [2024-12-10 04:09:30.667226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.461 [2024-12-10 04:09:30.667239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.461 [2024-12-10 04:09:30.667246] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.667252] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd0690): datao=0, datal=4096, cccid=4 00:21:36.461 [2024-12-10 04:09:30.667260] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2032700) on tqpair(0x1fd0690): expected_datao=0, payload_size=4096 00:21:36.461 [2024-12-10 04:09:30.667267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.667283] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.667292] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.667303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.461 [2024-12-10 04:09:30.667312] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.461 [2024-12-10 04:09:30.667319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.667326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032700) on tqpair=0x1fd0690 00:21:36.461 [2024-12-10 04:09:30.667343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:36.461 [2024-12-10 04:09:30.667359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:36.461 [2024-12-10 04:09:30.667374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:36.461 [2024-12-10 04:09:30.667389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:36.461 [2024-12-10 04:09:30.667398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:36.461 [2024-12-10 04:09:30.667407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:36.461 [2024-12-10 04:09:30.667417] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:36.461 [2024-12-10 04:09:30.667425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:36.461 [2024-12-10 04:09:30.667433] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:36.461 [2024-12-10 04:09:30.667453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.461 
[2024-12-10 04:09:30.667462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd0690) 00:21:36.461 [2024-12-10 04:09:30.667472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.461 [2024-12-10 04:09:30.667483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.667490] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.667497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd0690) 00:21:36.461 [2024-12-10 04:09:30.667506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.461 [2024-12-10 04:09:30.667531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032700, cid 4, qid 0 00:21:36.461 [2024-12-10 04:09:30.671554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032880, cid 5, qid 0 00:21:36.461 [2024-12-10 04:09:30.671575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.461 [2024-12-10 04:09:30.671586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.461 [2024-12-10 04:09:30.671593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.461 [2024-12-10 04:09:30.671600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032700) on tqpair=0x1fd0690 00:21:36.461 [2024-12-10 04:09:30.671610] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.461 [2024-12-10 04:09:30.671619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.462 [2024-12-10 04:09:30.671626] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.671632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032880) on tqpair=0x1fd0690 00:21:36.462 [2024-12-10 04:09:30.671650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.671659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd0690) 00:21:36.462 [2024-12-10 04:09:30.671669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.462 [2024-12-10 04:09:30.671692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032880, cid 5, qid 0 00:21:36.462 [2024-12-10 04:09:30.671790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.462 [2024-12-10 04:09:30.671802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.462 [2024-12-10 04:09:30.671813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.671820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032880) on tqpair=0x1fd0690 00:21:36.462 [2024-12-10 04:09:30.671835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.671844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd0690) 00:21:36.462 [2024-12-10 04:09:30.671854] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.462 [2024-12-10 04:09:30.671874] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032880, cid 5, qid 0 00:21:36.462 [2024-12-10 04:09:30.671965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.462 [2024-12-10 04:09:30.671977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.462 [2024-12-10 04:09:30.671984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.671991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032880) on tqpair=0x1fd0690 00:21:36.462 [2024-12-10 04:09:30.672006] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd0690) 00:21:36.462 [2024-12-10 04:09:30.672025] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.462 [2024-12-10 04:09:30.672045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032880, cid 5, qid 0 00:21:36.462 [2024-12-10 04:09:30.672135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.462 [2024-12-10 04:09:30.672147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.462 [2024-12-10 04:09:30.672154] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032880) on tqpair=0x1fd0690 00:21:36.462 [2024-12-10 04:09:30.672186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd0690) 00:21:36.462 [2024-12-10 04:09:30.672208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.462 [2024-12-10 04:09:30.672221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd0690) 00:21:36.462 [2024-12-10 04:09:30.672238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.462 [2024-12-10 04:09:30.672250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1fd0690) 00:21:36.462 [2024-12-10 04:09:30.672266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.462 [2024-12-10 04:09:30.672279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1fd0690) 00:21:36.462 [2024-12-10 04:09:30.672296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.462 [2024-12-10 04:09:30.672318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032880, cid 5, qid 0 00:21:36.462 
[2024-12-10 04:09:30.672329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032700, cid 4, qid 0 00:21:36.462 [2024-12-10 04:09:30.672336] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032a00, cid 6, qid 0 00:21:36.462 [2024-12-10 04:09:30.672347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032b80, cid 7, qid 0 00:21:36.462 [2024-12-10 04:09:30.672521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.462 [2024-12-10 04:09:30.672536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.462 [2024-12-10 04:09:30.672551] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672559] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd0690): datao=0, datal=8192, cccid=5 00:21:36.462 [2024-12-10 04:09:30.672567] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2032880) on tqpair(0x1fd0690): expected_datao=0, payload_size=8192 00:21:36.462 [2024-12-10 04:09:30.672574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672596] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672606] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.462 [2024-12-10 04:09:30.672624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.462 [2024-12-10 04:09:30.672630] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672637] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd0690): datao=0, datal=512, cccid=4 00:21:36.462 [2024-12-10 04:09:30.672644] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2032700) on tqpair(0x1fd0690): expected_datao=0, payload_size=512 00:21:36.462 [2024-12-10 04:09:30.672651] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672660] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672668] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.462 [2024-12-10 04:09:30.672685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.462 [2024-12-10 04:09:30.672691] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672697] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd0690): datao=0, datal=512, cccid=6 00:21:36.462 [2024-12-10 04:09:30.672705] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2032a00) on tqpair(0x1fd0690): expected_datao=0, payload_size=512 00:21:36.462 [2024-12-10 04:09:30.672712] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672721] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672728] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.462 [2024-12-10 04:09:30.672744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.462 [2024-12-10 04:09:30.672751] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672757] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd0690): datao=0, datal=4096, cccid=7 00:21:36.462 [2024-12-10 04:09:30.672765] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2032b80) on tqpair(0x1fd0690): expected_datao=0, payload_size=4096 00:21:36.462 [2024-12-10 04:09:30.672772] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672781] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.672789] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.717583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.462 [2024-12-10 04:09:30.717602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.462 [2024-12-10 04:09:30.717610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.717617] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032880) on tqpair=0x1fd0690 00:21:36.462 [2024-12-10 04:09:30.717638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.462 [2024-12-10 04:09:30.717654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.462 [2024-12-10 04:09:30.717661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.717668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032700) on tqpair=0x1fd0690 00:21:36.462 [2024-12-10 04:09:30.717684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.462 [2024-12-10 04:09:30.717695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.462 [2024-12-10 04:09:30.717702] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.717708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032a00) on tqpair=0x1fd0690 00:21:36.462 [2024-12-10 04:09:30.717718] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.462 [2024-12-10 04:09:30.717728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.462 [2024-12-10 04:09:30.717735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.462 [2024-12-10 04:09:30.717741] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032b80) on tqpair=0x1fd0690 00:21:36.462 ===================================================== 00:21:36.462 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:36.462 ===================================================== 00:21:36.462 Controller Capabilities/Features 00:21:36.462 ================================ 00:21:36.462 Vendor ID: 8086 00:21:36.462 Subsystem Vendor ID: 8086 00:21:36.462 Serial Number: SPDK00000000000001 00:21:36.462 Model Number: SPDK bdev Controller 00:21:36.462 Firmware Version: 25.01 00:21:36.462 Recommended Arb Burst: 6 00:21:36.462 IEEE OUI Identifier: e4 d2 5c 00:21:36.462 Multi-path I/O 00:21:36.462 May have multiple subsystem ports: Yes 00:21:36.462 May have multiple controllers: Yes 00:21:36.462 Associated with SR-IOV VF: No 00:21:36.462 Max Data Transfer Size: 131072 00:21:36.462 Max Number of Namespaces: 32 00:21:36.462 Max Number of I/O Queues: 127 00:21:36.462 NVMe Specification Version (VS): 1.3 00:21:36.462 NVMe Specification Version (Identify): 1.3 
00:21:36.462 Maximum Queue Entries: 128
00:21:36.463 Contiguous Queues Required: Yes
00:21:36.463 Arbitration Mechanisms Supported
00:21:36.463 Weighted Round Robin: Not Supported
00:21:36.463 Vendor Specific: Not Supported
00:21:36.463 Reset Timeout: 15000 ms
00:21:36.463 Doorbell Stride: 4 bytes
00:21:36.463 NVM Subsystem Reset: Not Supported
00:21:36.463 Command Sets Supported
00:21:36.463 NVM Command Set: Supported
00:21:36.463 Boot Partition: Not Supported
00:21:36.463 Memory Page Size Minimum: 4096 bytes
00:21:36.463 Memory Page Size Maximum: 4096 bytes
00:21:36.463 Persistent Memory Region: Not Supported
00:21:36.463 Optional Asynchronous Events Supported
00:21:36.463 Namespace Attribute Notices: Supported
00:21:36.463 Firmware Activation Notices: Not Supported
00:21:36.463 ANA Change Notices: Not Supported
00:21:36.463 PLE Aggregate Log Change Notices: Not Supported
00:21:36.463 LBA Status Info Alert Notices: Not Supported
00:21:36.463 EGE Aggregate Log Change Notices: Not Supported
00:21:36.463 Normal NVM Subsystem Shutdown event: Not Supported
00:21:36.463 Zone Descriptor Change Notices: Not Supported
00:21:36.463 Discovery Log Change Notices: Not Supported
00:21:36.463 Controller Attributes
00:21:36.463 128-bit Host Identifier: Supported
00:21:36.463 Non-Operational Permissive Mode: Not Supported
00:21:36.463 NVM Sets: Not Supported
00:21:36.463 Read Recovery Levels: Not Supported
00:21:36.463 Endurance Groups: Not Supported
00:21:36.463 Predictable Latency Mode: Not Supported
00:21:36.463 Traffic Based Keep ALive: Not Supported
00:21:36.463 Namespace Granularity: Not Supported
00:21:36.463 SQ Associations: Not Supported
00:21:36.463 UUID List: Not Supported
00:21:36.463 Multi-Domain Subsystem: Not Supported
00:21:36.463 Fixed Capacity Management: Not Supported
00:21:36.463 Variable Capacity Management: Not Supported
00:21:36.463 Delete Endurance Group: Not Supported
00:21:36.463 Delete NVM Set: Not Supported
00:21:36.463 Extended LBA Formats Supported: Not Supported
00:21:36.463 Flexible Data Placement Supported: Not Supported
00:21:36.463
00:21:36.463 Controller Memory Buffer Support
00:21:36.463 ================================
00:21:36.463 Supported: No
00:21:36.463
00:21:36.463 Persistent Memory Region Support
00:21:36.463 ================================
00:21:36.463 Supported: No
00:21:36.463
00:21:36.463 Admin Command Set Attributes
00:21:36.463 ============================
00:21:36.463 Security Send/Receive: Not Supported
00:21:36.463 Format NVM: Not Supported
00:21:36.463 Firmware Activate/Download: Not Supported
00:21:36.463 Namespace Management: Not Supported
00:21:36.463 Device Self-Test: Not Supported
00:21:36.463 Directives: Not Supported
00:21:36.463 NVMe-MI: Not Supported
00:21:36.463 Virtualization Management: Not Supported
00:21:36.463 Doorbell Buffer Config: Not Supported
00:21:36.463 Get LBA Status Capability: Not Supported
00:21:36.463 Command & Feature Lockdown Capability: Not Supported
00:21:36.463 Abort Command Limit: 4
00:21:36.463 Async Event Request Limit: 4
00:21:36.463 Number of Firmware Slots: N/A
00:21:36.463 Firmware Slot 1 Read-Only: N/A
00:21:36.463 Firmware Activation Without Reset: N/A
00:21:36.463 Multiple Update Detection Support: N/A
00:21:36.463 Firmware Update Granularity: No Information Provided
00:21:36.463 Per-Namespace SMART Log: No
00:21:36.463 Asymmetric Namespace Access Log Page: Not Supported
00:21:36.463 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:21:36.463 Command Effects Log Page: Supported
00:21:36.463 Get Log Page Extended Data: Supported
00:21:36.463 Telemetry Log Pages: Not Supported
00:21:36.463 Persistent Event Log Pages: Not Supported
00:21:36.463 Supported Log Pages Log Page: May Support
00:21:36.463 Commands Supported & Effects Log Page: Not Supported
00:21:36.463 Feature Identifiers & Effects Log Page:May Support
00:21:36.463 NVMe-MI Commands & Effects Log Page: May Support
00:21:36.463 Data Area 4 for Telemetry Log: Not Supported
00:21:36.463 Error Log Page Entries Supported: 128
00:21:36.463 Keep Alive: Supported
00:21:36.463 Keep Alive Granularity: 10000 ms
00:21:36.463
00:21:36.463 NVM Command Set Attributes
00:21:36.463 ==========================
00:21:36.463 Submission Queue Entry Size
00:21:36.463 Max: 64
00:21:36.463 Min: 64
00:21:36.463 Completion Queue Entry Size
00:21:36.463 Max: 16
00:21:36.463 Min: 16
00:21:36.463 Number of Namespaces: 32
00:21:36.463 Compare Command: Supported
00:21:36.463 Write Uncorrectable Command: Not Supported
00:21:36.463 Dataset Management Command: Supported
00:21:36.463 Write Zeroes Command: Supported
00:21:36.463 Set Features Save Field: Not Supported
00:21:36.463 Reservations: Supported
00:21:36.463 Timestamp: Not Supported
00:21:36.463 Copy: Supported
00:21:36.463 Volatile Write Cache: Present
00:21:36.463 Atomic Write Unit (Normal): 1
00:21:36.463 Atomic Write Unit (PFail): 1
00:21:36.463 Atomic Compare & Write Unit: 1
00:21:36.463 Fused Compare & Write: Supported
00:21:36.463 Scatter-Gather List
00:21:36.463 SGL Command Set: Supported
00:21:36.463 SGL Keyed: Supported
00:21:36.463 SGL Bit Bucket Descriptor: Not Supported
00:21:36.463 SGL Metadata Pointer: Not Supported
00:21:36.463 Oversized SGL: Not Supported
00:21:36.463 SGL Metadata Address: Not Supported
00:21:36.463 SGL Offset: Supported
00:21:36.463 Transport SGL Data Block: Not Supported
00:21:36.463 Replay Protected Memory Block: Not Supported
00:21:36.463
00:21:36.463 Firmware Slot Information
00:21:36.463 =========================
00:21:36.463 Active slot: 1
00:21:36.463 Slot 1 Firmware Revision: 25.01
00:21:36.463
00:21:36.463
00:21:36.463 Commands Supported and Effects
00:21:36.463 ==============================
00:21:36.463 Admin Commands
00:21:36.463 --------------
00:21:36.463 Get Log Page (02h): Supported
00:21:36.463 Identify (06h): Supported
00:21:36.463 Abort (08h): Supported
00:21:36.463 Set Features (09h): Supported
00:21:36.463 Get Features (0Ah): Supported
00:21:36.463 Asynchronous Event Request (0Ch): Supported
00:21:36.463 Keep Alive (18h): Supported
00:21:36.463 I/O Commands
00:21:36.463 ------------
00:21:36.463 Flush (00h): Supported LBA-Change
00:21:36.463 Write (01h): Supported LBA-Change
00:21:36.463 Read (02h): Supported
00:21:36.463 Compare (05h): Supported
00:21:36.463 Write Zeroes (08h): Supported LBA-Change
00:21:36.463 Dataset Management (09h): Supported LBA-Change
00:21:36.463 Copy (19h): Supported LBA-Change
00:21:36.463
00:21:36.463 Error Log
00:21:36.463 =========
00:21:36.463
00:21:36.463 Arbitration
00:21:36.463 ===========
00:21:36.463 Arbitration Burst: 1
00:21:36.463
00:21:36.463 Power Management
00:21:36.463 ================
00:21:36.463 Number of Power States: 1
00:21:36.463 Current Power State: Power State #0
00:21:36.463 Power State #0:
00:21:36.463 Max Power: 0.00 W
00:21:36.463 Non-Operational State: Operational
00:21:36.463 Entry Latency: Not Reported
00:21:36.463 Exit Latency: Not Reported
00:21:36.463 Relative Read Throughput: 0
00:21:36.463 Relative Read Latency: 0
00:21:36.463 Relative Write Throughput: 0
00:21:36.463 Relative Write Latency: 0
00:21:36.463 Idle Power: Not Reported 00:21:36.463 Active Power: Not Reported 00:21:36.463 Non-Operational Permissive Mode: Not Supported 00:21:36.463 00:21:36.463 Health Information 00:21:36.463 ================== 00:21:36.463 Critical Warnings: 00:21:36.463 Available Spare Space: OK 00:21:36.463 Temperature: OK 00:21:36.463 Device Reliability: OK 00:21:36.463 Read Only: No 00:21:36.463 Volatile Memory Backup: OK 00:21:36.463 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:36.463 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:36.463 Available Spare: 0% 00:21:36.463 Available Spare Threshold: 0% 00:21:36.463 Life Percentage Used:[2024-12-10 04:09:30.717858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.463 [2024-12-10 04:09:30.717870] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1fd0690) 00:21:36.463 [2024-12-10 04:09:30.717882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.463 [2024-12-10 04:09:30.717906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032b80, cid 7, qid 0 00:21:36.463 [2024-12-10 04:09:30.718005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.463 [2024-12-10 04:09:30.718019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.463 [2024-12-10 04:09:30.718026] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.463 [2024-12-10 04:09:30.718033] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032b80) on tqpair=0x1fd0690 00:21:36.464 [2024-12-10 04:09:30.718082] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:36.464 [2024-12-10 04:09:30.718103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032100) on tqpair=0x1fd0690 00:21:36.464 [2024-12-10 04:09:30.718115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.464 [2024-12-10 04:09:30.718124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032280) on tqpair=0x1fd0690 00:21:36.464 [2024-12-10 04:09:30.718132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.464 [2024-12-10 04:09:30.718140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032400) on tqpair=0x1fd0690 00:21:36.464 [2024-12-10 04:09:30.718147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.464 [2024-12-10 04:09:30.718155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.464 [2024-12-10 04:09:30.718163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.464 [2024-12-10 04:09:30.718175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.718183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.718190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.464 [2024-12-10 04:09:30.718200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:36.464 [2024-12-10 04:09:30.718225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.464 [2024-12-10 04:09:30.718307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.464 [2024-12-10 04:09:30.718325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.464 [2024-12-10 04:09:30.718332] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.718339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.464 [2024-12-10 04:09:30.718351] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.718359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.718366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.464 [2024-12-10 04:09:30.718376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.464 [2024-12-10 04:09:30.718402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.464 [2024-12-10 04:09:30.718496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.464 [2024-12-10 04:09:30.718509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.464 [2024-12-10 04:09:30.718516] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.718523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.464 [2024-12-10 04:09:30.718531] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:36.464 [2024-12-10 04:09:30.718539] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:36.464 [2024-12-10 04:09:30.718563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.718574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.718580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.464 [2024-12-10 04:09:30.718591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.464 [2024-12-10 04:09:30.718612] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.464 [2024-12-10 04:09:30.718693] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.464 [2024-12-10 04:09:30.718706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.464 [2024-12-10 04:09:30.718713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.718720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.464 [2024-12-10 04:09:30.718738] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.718747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.718754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.464 [2024-12-10 04:09:30.718764] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.464 [2024-12-10 04:09:30.718784] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.464 [2024-12-10 04:09:30.718888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.464 [2024-12-10 04:09:30.718903] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.464 [2024-12-10 04:09:30.718910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.718917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.464 [2024-12-10 04:09:30.718933] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.718943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.718949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.464 [2024-12-10 04:09:30.718959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.464 [2024-12-10 04:09:30.718980] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.464 [2024-12-10 04:09:30.719055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.464 [2024-12-10 04:09:30.719070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.464 [2024-12-10 04:09:30.719076] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.719083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.464 [2024-12-10 04:09:30.719099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.719109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.719115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.464 [2024-12-10 04:09:30.719125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.464 [2024-12-10 04:09:30.719146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.464 [2024-12-10 04:09:30.719225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.464 [2024-12-10 04:09:30.719237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.464 [2024-12-10 04:09:30.719244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.719250] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.464 [2024-12-10 04:09:30.719266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.719275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.719281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.464 [2024-12-10 04:09:30.719292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.464 [2024-12-10 04:09:30.719312] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.464 [2024-12-10 04:09:30.719395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.464 [2024-12-10 04:09:30.719408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.464 [2024-12-10 04:09:30.719415] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.719421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.464 [2024-12-10 04:09:30.719437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.719446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.719453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.464 [2024-12-10 04:09:30.719463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.464 [2024-12-10 04:09:30.719483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.464 [2024-12-10 04:09:30.719568] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.464 [2024-12-10 04:09:30.719581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.464 [2024-12-10 04:09:30.719588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.719595] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.464 [2024-12-10 04:09:30.719611] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.719620] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.719626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.464 [2024-12-10 04:09:30.719637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.464 [2024-12-10 04:09:30.719658] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.464 [2024-12-10 04:09:30.719741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.464 [2024-12-10 04:09:30.719759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.464 [2024-12-10 04:09:30.719766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.464 [2024-12-10 04:09:30.719773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.464 [2024-12-10 04:09:30.719789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.719799] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.719805] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.465 [2024-12-10 04:09:30.719815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.465 [2024-12-10 04:09:30.719836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.465 [2024-12-10 04:09:30.719915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.465 [2024-12-10 
04:09:30.719928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.465 [2024-12-10 04:09:30.719935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.719942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.465 [2024-12-10 04:09:30.719958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.719967] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.719973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.465 [2024-12-10 04:09:30.719984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.465 [2024-12-10 04:09:30.720004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.465 [2024-12-10 04:09:30.720076] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.465 [2024-12-10 04:09:30.720088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.465 [2024-12-10 04:09:30.720095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.720101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.465 [2024-12-10 04:09:30.720116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.720126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.720132] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.465 [2024-12-10 04:09:30.720142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.465 [2024-12-10 04:09:30.720162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.465 [2024-12-10 04:09:30.720244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.465 [2024-12-10 04:09:30.720255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.465 [2024-12-10 04:09:30.720262] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.720269] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.465 [2024-12-10 04:09:30.720284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.720293] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.720299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.465 [2024-12-10 04:09:30.720309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.465 [2024-12-10 04:09:30.720329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.465 [2024-12-10 04:09:30.720409] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.465 [2024-12-10 04:09:30.720422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.465 [2024-12-10 04:09:30.720433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.465 
[2024-12-10 04:09:30.720440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.465 [2024-12-10 04:09:30.720456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.720465] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.720472] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.465 [2024-12-10 04:09:30.720482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.465 [2024-12-10 04:09:30.720502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.465 [2024-12-10 04:09:30.720592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.465 [2024-12-10 04:09:30.720605] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.465 [2024-12-10 04:09:30.720612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.720619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.465 [2024-12-10 04:09:30.720635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.720644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.720650] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.465 [2024-12-10 04:09:30.720660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.465 [2024-12-10 04:09:30.720681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.465 [2024-12-10 04:09:30.720763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.465 [2024-12-10 04:09:30.720774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.465 [2024-12-10 04:09:30.720781] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.720788] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.465 [2024-12-10 04:09:30.720803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.720812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.720819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.465 [2024-12-10 04:09:30.720829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.465 [2024-12-10 04:09:30.720849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.465 [2024-12-10 04:09:30.720930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.465 [2024-12-10 04:09:30.720942] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.465 [2024-12-10 04:09:30.720949] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.720955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.465 [2024-12-10 04:09:30.720971] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.720980] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.720986] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.465 [2024-12-10 04:09:30.720996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.465 [2024-12-10 04:09:30.721016] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.465 [2024-12-10 04:09:30.721095] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.465 [2024-12-10 04:09:30.721108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.465 [2024-12-10 04:09:30.721115] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.721126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.465 [2024-12-10 04:09:30.721143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.721152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.721159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.465 [2024-12-10 04:09:30.721169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.465 [2024-12-10 04:09:30.721190] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.465 [2024-12-10 04:09:30.721274] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.465 [2024-12-10 04:09:30.721287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.465 [2024-12-10 04:09:30.721294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.721301] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.465 [2024-12-10 04:09:30.721316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.721326] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.721332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.465 [2024-12-10 04:09:30.721342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.465 [2024-12-10 04:09:30.721363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.465 [2024-12-10 04:09:30.721445] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.465 [2024-12-10 04:09:30.721456] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.465 [2024-12-10 04:09:30.721463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.721470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.465 [2024-12-10 04:09:30.721485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.721494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.721501] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.465 [2024-12-10 04:09:30.721511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.465 [2024-12-10 04:09:30.721531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.465 [2024-12-10 04:09:30.725556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.465 [2024-12-10 04:09:30.725574] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.465 [2024-12-10 04:09:30.725581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.725588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.465 [2024-12-10 04:09:30.725606] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.725616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.725622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd0690) 00:21:36.465 [2024-12-10 04:09:30.725633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.465 [2024-12-10 04:09:30.725655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2032580, cid 3, qid 0 00:21:36.465 [2024-12-10 04:09:30.725746] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.465 [2024-12-10 04:09:30.725760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.465 [2024-12-10 04:09:30.725767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.465 [2024-12-10 04:09:30.725774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2032580) on tqpair=0x1fd0690 00:21:36.465 [2024-12-10 04:09:30.725791] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:21:36.465 0% 00:21:36.465 Data Units Read: 0 00:21:36.465 Data Units Written: 0 00:21:36.465 Host Read Commands: 0 00:21:36.465 Host Write Commands: 0 00:21:36.466 Controller Busy Time: 0 minutes 00:21:36.466 Power Cycles: 0 00:21:36.466 Power On Hours: 0 hours 00:21:36.466 Unsafe Shutdowns: 0 00:21:36.466 Unrecoverable Media Errors: 0 00:21:36.466 Lifetime Error Log Entries: 0 00:21:36.466 Warning Temperature Time: 0 minutes 00:21:36.466 Critical Temperature Time: 0 minutes 00:21:36.466 00:21:36.466 Number of Queues 00:21:36.466 ================ 00:21:36.466 Number of I/O Submission Queues: 127 00:21:36.466 Number of I/O Completion Queues: 127 00:21:36.466 00:21:36.466 Active Namespaces 00:21:36.466 ================= 00:21:36.466 Namespace ID:1 00:21:36.466 Error Recovery Timeout: Unlimited 00:21:36.466 Command Set Identifier: NVM (00h) 00:21:36.466 Deallocate: Supported 00:21:36.466 Deallocated/Unwritten Error: Not Supported 00:21:36.466 Deallocated Read Value: Unknown 00:21:36.466 Deallocate in Write Zeroes: Not Supported 00:21:36.466 Deallocated Guard Field: 0xFFFF 00:21:36.466 Flush: Supported 00:21:36.466 Reservation: Supported 00:21:36.466 Namespace Sharing Capabilities: Multiple Controllers 00:21:36.466 Size (in LBAs): 131072 (0GiB) 00:21:36.466 Capacity (in LBAs): 131072 (0GiB) 00:21:36.466 Utilization (in LBAs): 131072 (0GiB) 00:21:36.466 NGUID: ABCDEF0123456789ABCDEF0123456789 
00:21:36.466 EUI64: ABCDEF0123456789 00:21:36.466 UUID: 0c4272b2-f628-4a5e-9962-7d9906af0c51 00:21:36.466 Thin Provisioning: Not Supported 00:21:36.466 Per-NS Atomic Units: Yes 00:21:36.466 Atomic Boundary Size (Normal): 0 00:21:36.466 Atomic Boundary Size (PFail): 0 00:21:36.466 Atomic Boundary Offset: 0 00:21:36.466 Maximum Single Source Range Length: 65535 00:21:36.466 Maximum Copy Length: 65535 00:21:36.466 Maximum Source Range Count: 1 00:21:36.466 NGUID/EUI64 Never Reused: No 00:21:36.466 Namespace Write Protected: No 00:21:36.466 Number of LBA Formats: 1 00:21:36.466 Current LBA Format: LBA Format #00 00:21:36.466 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:36.466 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:36.466 rmmod nvme_tcp 00:21:36.466 rmmod nvme_fabrics 00:21:36.466 rmmod nvme_keyring 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2453917 ']' 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2453917 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2453917 ']' 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2453917 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:36.466 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2453917 00:21:36.725 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:36.725 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:36.725 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2453917' 00:21:36.725 killing process 
with pid 2453917 00:21:36.725 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2453917 00:21:36.725 04:09:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2453917 00:21:36.725 04:09:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:36.725 04:09:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:36.985 04:09:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:36.985 04:09:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:36.985 04:09:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:36.985 04:09:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:36.985 04:09:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:36.985 04:09:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:36.985 04:09:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:36.985 04:09:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.985 04:09:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.985 04:09:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.888 04:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:38.888 00:21:38.888 real 0m5.677s 00:21:38.888 user 0m4.913s 00:21:38.888 sys 0m1.956s 00:21:38.888 04:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.888 04:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:38.888 ************************************ 00:21:38.888 END TEST nvmf_identify 00:21:38.888 ************************************ 00:21:38.888 04:09:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:38.888 04:09:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:38.888 04:09:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.888 04:09:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.888 ************************************ 00:21:38.888 START TEST nvmf_perf 00:21:38.888 ************************************ 00:21:38.888 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:38.888 * Looking for test storage... 
00:21:39.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:39.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.147 --rc genhtml_branch_coverage=1 00:21:39.147 --rc genhtml_function_coverage=1 00:21:39.147 --rc genhtml_legend=1 00:21:39.147 --rc geninfo_all_blocks=1 00:21:39.147 --rc geninfo_unexecuted_blocks=1 00:21:39.147 00:21:39.147 ' 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:39.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.147 --rc genhtml_branch_coverage=1 00:21:39.147 --rc genhtml_function_coverage=1 00:21:39.147 --rc genhtml_legend=1 00:21:39.147 --rc geninfo_all_blocks=1 00:21:39.147 --rc geninfo_unexecuted_blocks=1 00:21:39.147 00:21:39.147 ' 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:39.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.147 --rc genhtml_branch_coverage=1 00:21:39.147 --rc genhtml_function_coverage=1 00:21:39.147 --rc genhtml_legend=1 00:21:39.147 --rc geninfo_all_blocks=1 00:21:39.147 --rc geninfo_unexecuted_blocks=1 00:21:39.147 00:21:39.147 ' 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:39.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.147 --rc genhtml_branch_coverage=1 00:21:39.147 --rc genhtml_function_coverage=1 00:21:39.147 --rc genhtml_legend=1 00:21:39.147 --rc geninfo_all_blocks=1 00:21:39.147 --rc geninfo_unexecuted_blocks=1 00:21:39.147 00:21:39.147 ' 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:39.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.147 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:39.148 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:39.148 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:39.148 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.148 04:09:33 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.148 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.148 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:39.148 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:39.148 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:39.148 04:09:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:41.678 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:41.678 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:41.678 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.678 04:09:35 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:41.678 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:41.678 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.679 04:09:35 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:41.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:21:41.679 00:21:41.679 --- 10.0.0.2 ping statistics --- 00:21:41.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.679 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:41.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:21:41.679 00:21:41.679 --- 10.0.0.1 ping statistics --- 00:21:41.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.679 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2456011 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2456011 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2456011 ']' 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:41.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.679 04:09:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:41.679 [2024-12-10 04:09:35.773571] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:21:41.679 [2024-12-10 04:09:35.773647] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.679 [2024-12-10 04:09:35.845907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:41.679 [2024-12-10 04:09:35.904154] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.679 [2024-12-10 04:09:35.904207] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.679 [2024-12-10 04:09:35.904236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.679 [2024-12-10 04:09:35.904247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.679 [2024-12-10 04:09:35.904257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.679 [2024-12-10 04:09:35.905873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.679 [2024-12-10 04:09:35.905996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.679 [2024-12-10 04:09:35.906064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.679 [2024-12-10 04:09:35.906068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.679 04:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.679 04:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:21:41.679 04:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:41.679 04:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:41.679 04:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:41.679 04:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.679 04:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:41.679 04:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:44.966 04:09:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:44.966 04:09:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:45.224 04:09:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:21:45.224 04:09:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:45.482 04:09:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
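The nvmf_perf trace around this point reduces to a short RPC sequence: start nvmf_tgt inside the cvl_0_0_ns_spdk namespace, create the TCP transport, build nqn.2016-06.io.spdk:cnode1 with the Malloc0 namespace (the actual run also attaches Nvme0n1), expose a listener on 10.0.0.2:4420, and drive it with spdk_nvme_perf. A minimal sketch of that sequence, using only commands already traced in this log (workspace path, namespace name, and 10.0.0.2:4420 listener are taken from this run; the sleep is a crude stand-in for the harness's waitforlisten and is not part of the test scripts):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$SPDK/scripts/rpc.py
# start the target inside the test namespace (same invocation as nvmf/common.sh@508 above)
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
sleep 2   # assumption: the harness instead polls /var/tmp/spdk.sock via waitforlisten
# transport, subsystem, namespace and listener (host/perf.sh@42, @44, @46, @48)
$rpc nvmf_create_transport -t tcp -o
$rpc bdev_malloc_create 64 512                         # prints the bdev name, "Malloc0"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# exercise the listener the same way host/perf.sh@56 does
$SPDK/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'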
00:21:45.482 04:09:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:21:45.482 04:09:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:45.482 04:09:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:45.482 04:09:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:45.740 [2024-12-10 04:09:40.023586] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.740 04:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:45.997 04:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:45.997 04:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:46.255 04:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:46.255 04:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:46.519 04:09:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.822 [2024-12-10 04:09:41.135609] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.822 04:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:47.105 04:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:21:47.105 04:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:47.105 04:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:47.105 04:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:48.485 Initializing NVMe Controllers 00:21:48.485 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:21:48.485 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:21:48.485 Initialization complete. Launching workers. 
00:21:48.485 ======================================================== 00:21:48.485 Latency(us) 00:21:48.485 Device Information : IOPS MiB/s Average min max 00:21:48.485 PCIE (0000:88:00.0) NSID 1 from core 0: 85317.19 333.27 374.53 16.25 4620.88 00:21:48.485 ======================================================== 00:21:48.485 Total : 85317.19 333.27 374.53 16.25 4620.88 00:21:48.485 00:21:48.485 04:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:49.864 Initializing NVMe Controllers 00:21:49.864 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:49.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:49.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:49.865 Initialization complete. Launching workers. 00:21:49.865 ======================================================== 00:21:49.865 Latency(us) 00:21:49.865 Device Information : IOPS MiB/s Average min max 00:21:49.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 53.00 0.21 19350.42 151.75 45811.66 00:21:49.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 62.00 0.24 16635.63 7914.93 54868.41 00:21:49.865 ======================================================== 00:21:49.865 Total : 115.00 0.45 17886.79 151.75 54868.41 00:21:49.865 00:21:49.865 04:09:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:51.247 Initializing NVMe Controllers 00:21:51.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:51.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:51.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:51.247 Initialization complete. Launching workers. 00:21:51.247 ======================================================== 00:21:51.247 Latency(us) 00:21:51.247 Device Information : IOPS MiB/s Average min max 00:21:51.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8393.73 32.79 3813.72 542.74 10414.45 00:21:51.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3837.88 14.99 8364.89 6283.76 19573.52 00:21:51.247 ======================================================== 00:21:51.247 Total : 12231.61 47.78 5241.73 542.74 19573.52 00:21:51.247 00:21:51.247 04:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:51.247 04:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:51.247 04:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:53.782 Initializing NVMe Controllers 00:21:53.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:53.782 Controller IO queue size 128, less than required. 00:21:53.782 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:53.782 Controller IO queue size 128, less than required. 00:21:53.782 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:53.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:53.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:53.782 Initialization complete. Launching workers. 00:21:53.782 ======================================================== 00:21:53.782 Latency(us) 00:21:53.782 Device Information : IOPS MiB/s Average min max 00:21:53.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1746.75 436.69 74524.50 47711.96 113593.37 00:21:53.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 581.92 145.48 232187.70 78884.27 375219.45 00:21:53.782 ======================================================== 00:21:53.782 Total : 2328.66 582.17 113923.37 47711.96 375219.45 00:21:53.782 00:21:53.782 04:09:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:53.782 No valid NVMe controllers or AIO or URING devices found 00:21:53.782 Initializing NVMe Controllers 00:21:53.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:53.782 Controller IO queue size 128, less than required. 00:21:53.782 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:53.782 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:53.782 Controller IO queue size 128, less than required. 00:21:53.783 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:53.783 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:53.783 WARNING: Some requested NVMe devices were skipped 00:21:53.783 04:09:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:56.316 Initializing NVMe Controllers 00:21:56.316 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:56.316 Controller IO queue size 128, less than required. 00:21:56.316 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:56.316 Controller IO queue size 128, less than required. 00:21:56.316 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:56.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:56.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:56.316 Initialization complete. Launching workers. 
00:21:56.316 00:21:56.316 ==================== 00:21:56.316 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:56.316 TCP transport: 00:21:56.316 polls: 9739 00:21:56.316 idle_polls: 6869 00:21:56.316 sock_completions: 2870 00:21:56.316 nvme_completions: 4931 00:21:56.316 submitted_requests: 7428 00:21:56.316 queued_requests: 1 00:21:56.316 00:21:56.316 ==================== 00:21:56.316 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:56.316 TCP transport: 00:21:56.316 polls: 12396 00:21:56.316 idle_polls: 8680 00:21:56.316 sock_completions: 3716 00:21:56.316 nvme_completions: 6619 00:21:56.316 submitted_requests: 9958 00:21:56.316 queued_requests: 1 00:21:56.316 ======================================================== 00:21:56.316 Latency(us) 00:21:56.316 Device Information : IOPS MiB/s Average min max 00:21:56.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1232.50 308.12 106654.30 65831.66 181510.50 00:21:56.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1654.49 413.62 77469.39 39342.56 110333.58 00:21:56.316 ======================================================== 00:21:56.316 Total : 2886.99 721.75 89928.83 39342.56 181510.50 00:21:56.316 00:21:56.574 04:09:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:56.574 04:09:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:56.837 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:56.837 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:56.837 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:56.838 rmmod nvme_tcp 00:21:56.838 rmmod nvme_fabrics 00:21:56.838 rmmod nvme_keyring 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2456011 ']' 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2456011 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2456011 ']' 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2456011 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2456011 00:21:56.838 04:09:51 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2456011' 00:21:56.838 killing process with pid 2456011 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2456011 00:21:56.838 04:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2456011 00:21:58.742 04:09:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:58.742 04:09:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:58.742 04:09:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:58.742 04:09:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:58.742 04:09:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:21:58.742 04:09:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:58.742 04:09:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:21:58.742 04:09:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:58.742 04:09:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:58.742 04:09:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.742 04:09:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.742 04:09:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:00.649 00:22:00.649 real 0m21.558s 00:22:00.649 user 1m5.426s 00:22:00.649 sys 0m5.812s 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:00.649 ************************************ 00:22:00.649 END TEST nvmf_perf 00:22:00.649 ************************************ 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.649 ************************************ 00:22:00.649 START TEST nvmf_fio_host 00:22:00.649 ************************************ 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:00.649 * Looking for test storage... 
00:22:00.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:00.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.649 --rc genhtml_branch_coverage=1 00:22:00.649 --rc genhtml_function_coverage=1 00:22:00.649 --rc genhtml_legend=1 00:22:00.649 --rc geninfo_all_blocks=1 00:22:00.649 --rc geninfo_unexecuted_blocks=1 00:22:00.649 00:22:00.649 ' 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:00.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.649 --rc genhtml_branch_coverage=1 00:22:00.649 --rc genhtml_function_coverage=1 00:22:00.649 --rc genhtml_legend=1 00:22:00.649 --rc geninfo_all_blocks=1 00:22:00.649 --rc geninfo_unexecuted_blocks=1 00:22:00.649 00:22:00.649 ' 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:00.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.649 --rc genhtml_branch_coverage=1 00:22:00.649 --rc genhtml_function_coverage=1 00:22:00.649 --rc genhtml_legend=1 00:22:00.649 --rc geninfo_all_blocks=1 00:22:00.649 --rc geninfo_unexecuted_blocks=1 00:22:00.649 00:22:00.649 ' 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:00.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.649 --rc genhtml_branch_coverage=1 00:22:00.649 --rc genhtml_function_coverage=1 00:22:00.649 --rc genhtml_legend=1 00:22:00.649 --rc geninfo_all_blocks=1 00:22:00.649 --rc geninfo_unexecuted_blocks=1 00:22:00.649 00:22:00.649 ' 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.649 04:09:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.649 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:00.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:00.650 
04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:00.650 04:09:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:03.186 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:03.186 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:03.186 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:03.186 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:03.187 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:03.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:03.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:22:03.187 00:22:03.187 --- 10.0.0.2 ping statistics --- 00:22:03.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.187 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:03.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:03.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:22:03.187 00:22:03.187 --- 10.0.0.1 ping statistics --- 00:22:03.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.187 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2459985 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2459985 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2459985 ']' 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.187 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.187 [2024-12-10 04:09:57.384047] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:22:03.187 [2024-12-10 04:09:57.384146] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.187 [2024-12-10 04:09:57.455369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:03.187 [2024-12-10 04:09:57.510108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.187 [2024-12-10 04:09:57.510167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.187 [2024-12-10 04:09:57.510195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.187 [2024-12-10 04:09:57.510206] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.187 [2024-12-10 04:09:57.510215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.187 [2024-12-10 04:09:57.511665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.187 [2024-12-10 04:09:57.511722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.187 [2024-12-10 04:09:57.511785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:03.187 [2024-12-10 04:09:57.511788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.445 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.445 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:03.445 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:03.702 [2024-12-10 04:09:57.906096] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.703 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:03.703 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:03.703 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.703 04:09:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:03.961 Malloc1 00:22:03.961 04:09:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:04.219 04:09:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:04.476 04:09:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:04.734 [2024-12-10 04:09:59.043719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.734 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:04.992 04:09:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:05.250 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:05.250 fio-3.35 00:22:05.250 Starting 1 thread 00:22:07.779 00:22:07.779 test: (groupid=0, jobs=1): 
err= 0: pid=2460344: Tue Dec 10 04:10:01 2024 00:22:07.779 read: IOPS=8651, BW=33.8MiB/s (35.4MB/s)(67.8MiB/2006msec) 00:22:07.779 slat (nsec): min=1983, max=112214, avg=2669.44, stdev=1766.83 00:22:07.779 clat (usec): min=2357, max=14262, avg=8127.69, stdev=673.41 00:22:07.779 lat (usec): min=2381, max=14264, avg=8130.36, stdev=673.33 00:22:07.779 clat percentiles (usec): 00:22:07.779 | 1.00th=[ 6587], 5.00th=[ 7111], 10.00th=[ 7308], 20.00th=[ 7635], 00:22:07.779 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8291], 00:22:07.779 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 8979], 95.00th=[ 9110], 00:22:07.779 | 99.00th=[ 9503], 99.50th=[ 9765], 99.90th=[13173], 99.95th=[14091], 00:22:07.779 | 99.99th=[14222] 00:22:07.779 bw ( KiB/s): min=33144, max=35184, per=99.83%, avg=34546.00, stdev=942.79, samples=4 00:22:07.779 iops : min= 8286, max= 8796, avg=8636.50, stdev=235.70, samples=4 00:22:07.779 write: IOPS=8641, BW=33.8MiB/s (35.4MB/s)(67.7MiB/2006msec); 0 zone resets 00:22:07.779 slat (usec): min=2, max=101, avg= 2.83, stdev= 1.61 00:22:07.779 clat (usec): min=1004, max=12404, avg=6623.35, stdev=547.79 00:22:07.779 lat (usec): min=1011, max=12407, avg=6626.17, stdev=547.77 00:22:07.779 clat percentiles (usec): 00:22:07.779 | 1.00th=[ 5407], 5.00th=[ 5800], 10.00th=[ 5997], 20.00th=[ 6194], 00:22:07.779 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6718], 00:22:07.779 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7439], 00:22:07.779 | 99.00th=[ 7832], 99.50th=[ 8029], 99.90th=[ 9896], 99.95th=[10945], 00:22:07.779 | 99.99th=[12387] 00:22:07.779 bw ( KiB/s): min=34184, max=34904, per=100.00%, avg=34578.00, stdev=368.54, samples=4 00:22:07.779 iops : min= 8546, max= 8726, avg=8644.50, stdev=92.14, samples=4 00:22:07.779 lat (msec) : 2=0.03%, 4=0.12%, 10=99.69%, 20=0.16% 00:22:07.779 cpu : usr=65.44%, sys=32.92%, ctx=46, majf=0, minf=35 00:22:07.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:07.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:07.779 issued rwts: total=17354,17335,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:07.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:07.779 00:22:07.779 Run status group 0 (all jobs): 00:22:07.779 READ: bw=33.8MiB/s (35.4MB/s), 33.8MiB/s-33.8MiB/s (35.4MB/s-35.4MB/s), io=67.8MiB (71.1MB), run=2006-2006msec 00:22:07.779 WRITE: bw=33.8MiB/s (35.4MB/s), 33.8MiB/s-33.8MiB/s (35.4MB/s-35.4MB/s), io=67.7MiB (71.0MB), run=2006-2006msec 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:07.779 04:10:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:07.779 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:07.779 fio-3.35 00:22:07.779 Starting 1 thread 00:22:09.679 [2024-12-10 04:10:03.820987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1738b90 is same with the state(6) to be set 00:22:09.679 [2024-12-10 04:10:03.821080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1738b90 is same with the state(6) to be set 00:22:10.244 00:22:10.244 test: (groupid=0, jobs=1): err= 0: pid=2460745: Tue Dec 10 04:10:04 2024 00:22:10.244 read: IOPS=7997, BW=125MiB/s (131MB/s)(251MiB/2005msec) 00:22:10.244 slat (usec): min=2, max=119, avg= 3.74, stdev= 1.90 00:22:10.244 clat (usec): min=2409, max=52710, avg=9254.17, stdev=3988.13 00:22:10.244 lat (usec): min=2412, max=52714, avg=9257.90, stdev=3988.16 00:22:10.244 clat percentiles (usec): 00:22:10.244 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 7242], 00:22:10.244 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9372], 00:22:10.244 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11600], 95.00th=[12649], 00:22:10.244 | 99.00th=[16909], 99.50th=[45876], 99.90th=[51643], 
99.95th=[52167], 00:22:10.244 | 99.99th=[52691] 00:22:10.244 bw ( KiB/s): min=56448, max=76608, per=50.77%, avg=64960.00, stdev=8437.45, samples=4 00:22:10.244 iops : min= 3528, max= 4788, avg=4060.00, stdev=527.34, samples=4 00:22:10.244 write: IOPS=4870, BW=76.1MiB/s (79.8MB/s)(134MiB/1756msec); 0 zone resets 00:22:10.244 slat (usec): min=30, max=193, avg=33.59, stdev= 5.71 00:22:10.244 clat (usec): min=4533, max=18532, avg=11785.03, stdev=1881.10 00:22:10.244 lat (usec): min=4564, max=18581, avg=11818.62, stdev=1881.41 00:22:10.244 clat percentiles (usec): 00:22:10.244 | 1.00th=[ 7767], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10159], 00:22:10.244 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11731], 60.00th=[12256], 00:22:10.244 | 70.00th=[12649], 80.00th=[13304], 90.00th=[14353], 95.00th=[15008], 00:22:10.244 | 99.00th=[16319], 99.50th=[16712], 99.90th=[17433], 99.95th=[17957], 00:22:10.244 | 99.99th=[18482] 00:22:10.244 bw ( KiB/s): min=59488, max=79136, per=87.20%, avg=67952.00, stdev=8206.22, samples=4 00:22:10.244 iops : min= 3718, max= 4946, avg=4247.00, stdev=512.89, samples=4 00:22:10.244 lat (msec) : 4=0.06%, 10=54.25%, 20=45.18%, 50=0.39%, 100=0.12% 00:22:10.244 cpu : usr=76.50%, sys=22.36%, ctx=56, majf=0, minf=70 00:22:10.244 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:10.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:10.244 issued rwts: total=16035,8552,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.244 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:10.244 00:22:10.244 Run status group 0 (all jobs): 00:22:10.244 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=251MiB (263MB), run=2005-2005msec 00:22:10.244 WRITE: bw=76.1MiB/s (79.8MB/s), 76.1MiB/s-76.1MiB/s (79.8MB/s-79.8MB/s), io=134MiB (140MB), run=1756-1756msec 00:22:10.244 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:10.502 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:10.502 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:10.502 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:10.502 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:10.502 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:10.502 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:10.502 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:10.502 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:10.502 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:10.502 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:10.502 rmmod nvme_tcp 00:22:10.502 rmmod nvme_fabrics 00:22:10.502 rmmod nvme_keyring 00:22:10.502 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:10.502 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:10.502 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:10.503 04:10:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2459985 ']' 00:22:10.503 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2459985 00:22:10.503 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2459985 ']' 00:22:10.503 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2459985 00:22:10.503 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:10.503 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.503 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2459985 00:22:10.503 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:10.503 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:10.503 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2459985' 00:22:10.503 killing process with pid 2459985 00:22:10.503 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2459985 00:22:10.503 04:10:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2459985 00:22:10.762 04:10:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:10.762 04:10:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:10.762 04:10:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:10.762 04:10:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:10.762 04:10:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:10.762 04:10:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:10.762 04:10:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:10.762 04:10:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:10.762 04:10:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:10.762 04:10:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.762 04:10:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.762 04:10:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:13.300 00:22:13.300 real 0m12.266s 00:22:13.300 user 0m35.747s 00:22:13.300 sys 0m4.093s 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.300 ************************************ 00:22:13.300 END TEST nvmf_fio_host 00:22:13.300 ************************************ 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.300 ************************************ 00:22:13.300 START TEST nvmf_failover 00:22:13.300 ************************************ 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:13.300 * Looking for test storage... 00:22:13.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:13.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.300 --rc genhtml_branch_coverage=1 00:22:13.300 --rc genhtml_function_coverage=1 00:22:13.300 --rc genhtml_legend=1 00:22:13.300 --rc geninfo_all_blocks=1 00:22:13.300 --rc geninfo_unexecuted_blocks=1 00:22:13.300 00:22:13.300 ' 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:13.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.300 --rc genhtml_branch_coverage=1 00:22:13.300 --rc genhtml_function_coverage=1 00:22:13.300 --rc genhtml_legend=1 00:22:13.300 --rc geninfo_all_blocks=1 00:22:13.300 --rc geninfo_unexecuted_blocks=1 00:22:13.300 00:22:13.300 ' 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:13.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.300 --rc genhtml_branch_coverage=1 00:22:13.300 --rc genhtml_function_coverage=1 00:22:13.300 --rc genhtml_legend=1 00:22:13.300 --rc geninfo_all_blocks=1 00:22:13.300 --rc geninfo_unexecuted_blocks=1 00:22:13.300 00:22:13.300 ' 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:13.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.300 --rc genhtml_branch_coverage=1 00:22:13.300 --rc genhtml_function_coverage=1 00:22:13.300 --rc genhtml_legend=1 00:22:13.300 --rc geninfo_all_blocks=1 00:22:13.300 --rc geninfo_unexecuted_blocks=1 00:22:13.300 00:22:13.300 ' 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.300 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:13.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
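Aside (not part of the captured trace): the nvmf_fio_host run shown above drives the target entirely through scripts/rpc.py and then hands I/O to the fio SPDK NVMe plugin. A condensed, hand-written sketch of that flow follows; it only restates the commands already visible in the trace (transport, Malloc1 bdev, subsystem nqn.2016-06.io.spdk:cnode1, listener on 10.0.0.2:4420, fio via LD_PRELOAD), assumes the nvmf_tgt application started earlier in this log is still running, and uses this CI host's paths rather than any canonical recipe.

# Sketch only: consolidates the rpc.py calls and fio invocation from the
# nvmf_fio_host trace above. Paths and addresses are the ones this log uses.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# fio picks up the SPDK NVMe ioengine through LD_PRELOAD; the target is
# addressed through the plugin's filename syntax instead of a block device.
LD_PRELOAD=$plugin /usr/src/fio/fio \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096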
00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:13.301 04:10:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:15.202 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:15.202 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:15.202 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:15.202 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:15.202 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:15.203 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.203 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.461 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.461 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.461 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:15.461 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.461 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.461 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.461 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:15.461 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:15.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:22:15.461 00:22:15.461 --- 10.0.0.2 ping statistics --- 00:22:15.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.461 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:22:15.461 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:15.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:22:15.462 00:22:15.462 --- 10.0.0.1 ping statistics --- 00:22:15.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.462 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2462998 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2462998 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2462998 ']' 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.462 04:10:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:15.462 [2024-12-10 04:10:09.772154] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:22:15.462 [2024-12-10 04:10:09.772235] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.720 [2024-12-10 04:10:09.846305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:15.720 [2024-12-10 04:10:09.903512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:15.720 [2024-12-10 04:10:09.903569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.720 [2024-12-10 04:10:09.903591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.720 [2024-12-10 04:10:09.903602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.720 [2024-12-10 04:10:09.903612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:15.720 [2024-12-10 04:10:09.905089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.720 [2024-12-10 04:10:09.905150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.720 [2024-12-10 04:10:09.905153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.720 04:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:15.720 04:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:15.720 04:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:15.720 04:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:15.720 04:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:15.720 04:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.720 04:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:15.978 [2024-12-10 04:10:10.360725] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.236 04:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:16.494 Malloc0 00:22:16.494 04:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:16.752 04:10:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:17.010 04:10:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.268 [2024-12-10 04:10:11.512945] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.268 04:10:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:17.525 [2024-12-10 04:10:11.785888] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:17.525 04:10:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:17.783 [2024-12-10 04:10:12.062788] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:22:17.783 04:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2463296 00:22:17.783 04:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:17.783 04:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:17.783 04:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2463296 /var/tmp/bdevperf.sock 00:22:17.783 04:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2463296 ']' 00:22:17.783 04:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:17.783 04:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.783 04:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:17.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:17.783 04:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.783 04:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:18.046 04:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.046 04:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:18.046 04:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:18.614 NVMe0n1 00:22:18.614 04:10:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:19.182 00:22:19.182 04:10:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2463428 00:22:19.182 04:10:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:19.182 04:10:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:20.115 04:10:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:20.374 [2024-12-10 04:10:14.600247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f8e00 is same with the state(6) to be set 00:22:20.374 [2024-12-10 04:10:14.600355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f8e00 is same with the state(6) to be set 00:22:20.374 [2024-12-10 04:10:14.600382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f8e00 is same with the state(6) to be set 00:22:20.374 
[2024-12-10 04:10:14.600401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f8e00 is same with the state(6) to be set
(the tcp.c:1790 message above repeats for tqpair=0x13f8e00 through [2024-12-10 04:10:14.601425]; the duplicate entries are collapsed here)
00:22:20.374 04:10:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:22:23.655 04:10:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:22:23.913
00:22:23.913 04:10:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:24.171 [2024-12-10 04:10:18.440887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f98b0 is same with the state(6) to be set
(the same message repeats for tqpair=0x13f98b0 through [2024-12-10 04:10:18.441011]; the duplicate entries are collapsed here)
00:22:24.171 04:10:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:22:27.555 04:10:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:27.555 [2024-12-10 04:10:21.768610] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:27.555 04:10:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:22:28.489 04:10:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
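The command trace above is the core of the failover exercise: bdevperf is driving NVMe0n1 while failover.sh attaches a second path for controller NVMe0 on port 4422 with -x failover and then removes and re-adds target listeners so the host is forced to switch paths; the repeated tcp.c:1790 errors appear to come from target-side qpairs being torn down while that happens. A minimal standalone sketch of the same trigger sequence, using the commands exactly as logged (rpc.py stands for the full scripts/rpc.py path shown above; the addresses, ports and the /var/tmp/bdevperf.sock socket are specific to this run):

  # host side: add a second path to the existing NVMe0 controller, in failover mode
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # target side: drop one listener, bring another back, then drop the newly added one
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422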
00:22:28.747 [2024-12-10 04:10:23.040092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12beee0 is same with the state(6) to be set
(the same message repeats for tqpair=0x12beee0 through [2024-12-10 04:10:23.040744]; the duplicate entries are collapsed here)
00:22:28.748 04:10:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2463428
00:22:35.313 {
00:22:35.313   "results": [
00:22:35.313     {
00:22:35.313       "job": "NVMe0n1",
00:22:35.313       "core_mask": "0x1",
00:22:35.313       "workload": "verify",
00:22:35.313       "status": "finished",
00:22:35.313       "verify_range": {
00:22:35.313         "start": 0,
00:22:35.313         "length": 16384
00:22:35.313       },
00:22:35.313       "queue_depth": 128,
00:22:35.313       "io_size": 4096,
00:22:35.313       "runtime": 15.008794,
00:22:35.313       "iops": 8543.857687699625,
00:22:35.313       "mibps": 33.37444409257666,
00:22:35.313       "io_failed": 4797,
00:22:35.313       "io_timeout": 0,
00:22:35.313       "avg_latency_us": 14413.274945033285,
00:22:35.314       "min_latency_us": 555.2355555555556,
00:22:35.314       "max_latency_us": 16893.724444444444
00:22:35.314     }
00:22:35.314   ],
00:22:35.314   "core_count": 1
00:22:35.314 }
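The JSON block above is bdevperf's summary for the 15-second verify run that spans the forced failovers: roughly 8.5k IOPS, with 4797 I/Os reported as failed, presumably while paths were being dropped. A quick way to pull the headline numbers out of such a summary, assuming it has been captured to a file named bdevperf.json and that jq is available (neither is part of the logged test):

  # hypothetical post-processing of the captured summary
  jq -r '.results[0] | "job=\(.job) iops=\(.iops) io_failed=\(.io_failed) avg_latency_us=\(.avg_latency_us)"' bdevperf.json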
00:22:35.314 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2463296
00:22:35.314 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2463296 ']'
00:22:35.314 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2463296
00:22:35.314 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:22:35.314 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:35.314 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2463296
00:22:35.314 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:35.314 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:35.314 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2463296'
killing process with pid 2463296
00:22:35.314 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2463296
00:22:35.314 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2463296
00:22:35.314 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:35.314 [2024-12-10 04:10:12.130708] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization...
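The autotest_common.sh lines above are the traced body of the killprocess helper shutting down the bdevperf process (pid 2463296): confirm the pid argument is set, check the process is still alive and what it is, log the kill, send the signal, and wait for the process to exit before try.txt is dumped. A condensed sketch of that kill-and-wait pattern (a paraphrase of the trace, not the helper's actual source; the pid value is just the one from this run):

  pid=2463296
  if kill -0 "$pid" 2>/dev/null; then   # process still running?
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                       # reap it so its output file is complete
  fi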
00:22:35.314 [2024-12-10 04:10:12.130792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2463296 ]
00:22:35.314 [2024-12-10 04:10:12.199512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:35.314 [2024-12-10 04:10:12.259928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:35.314 Running I/O for 15 seconds...
00:22:35.314 8634.00 IOPS, 33.73 MiB/s [2024-12-10T03:10:29.703Z]
[2024-12-10 04:10:14.602367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:35.314 [2024-12-10 04:10:14.602414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the nvme_io_qpair_print_command / spdk_nvme_print_completion pairs continue for the remaining in-flight WRITE and READ commands on sqid:1, LBAs 81792 through 82808, every one completed as ABORTED - SQ DELETION (00/08); toward the end the queued requests are aborted (nvme_qpair_abort_queued_reqs) and completed manually (nvme_qpair_manual_complete_request), with the last such completion at [2024-12-10 04:10:14.606734]; the repeated entries are collapsed here)
[2024-12-10 04:10:14.606688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.317 [2024-12-10 04:10:14.606700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:35.317 [2024-12-10 04:10:14.606711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:35.317 [2024-12-10 04:10:14.606722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81984 len:8 PRP1 0x0 PRP2 0x0 00:22:35.317 [2024-12-10 04:10:14.606734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.317 [2024-12-10 04:10:14.606805] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:35.317 [2024-12-10 04:10:14.606845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.317 [2024-12-10 04:10:14.606863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.317 [2024-12-10 04:10:14.606878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.317 [2024-12-10 04:10:14.606891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.317 [2024-12-10 04:10:14.606904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.317 [2024-12-10 04:10:14.606917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.317 [2024-12-10 04:10:14.606930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.317 [2024-12-10 04:10:14.606943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.317 [2024-12-10 04:10:14.606956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:35.317 [2024-12-10 04:10:14.607007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x586180 (9): Bad file descriptor 00:22:35.317 [2024-12-10 04:10:14.610327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:35.317 [2024-12-10 04:10:14.632119] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:22:35.317 8480.00 IOPS, 33.12 MiB/s [2024-12-10T03:10:29.706Z] 8496.00 IOPS, 33.19 MiB/s [2024-12-10T03:10:29.706Z] 8538.00 IOPS, 33.35 MiB/s [2024-12-10T03:10:29.706Z] 8546.80 IOPS, 33.39 MiB/s [2024-12-10T03:10:29.706Z] [2024-12-10 04:10:18.440800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.317 [2024-12-10 04:10:18.440857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.317 [2024-12-10 04:10:18.440876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.317 [2024-12-10 04:10:18.440903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.317 [2024-12-10 04:10:18.440919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.317 [2024-12-10 04:10:18.440932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.317 [2024-12-10 04:10:18.440947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.317 [2024-12-10 04:10:18.440976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.317 [2024-12-10 04:10:18.440990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586180 is same with the state(6) to be set 00:22:35.318 [2024-12-10 04:10:18.441295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 
04:10:18.441477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.318 [2024-12-10 04:10:18.441815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.318 [2024-12-10 04:10:18.441845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:89080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.318 [2024-12-10 04:10:18.441888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.441985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.441999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.442013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.442030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.442045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.442058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.442072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.442085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.442100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:61 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.442113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.442127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.442141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.442155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.442168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.442183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.442196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.442210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.442223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.442238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.442251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.442265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.442279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.442293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.442306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.442320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.318 [2024-12-10 04:10:18.442333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.318 [2024-12-10 04:10:18.442348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89912 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 
04:10:18.442716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.442974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.442987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.443014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.319 [2024-12-10 04:10:18.443041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:89096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.319 [2024-12-10 04:10:18.443067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.319 [2024-12-10 04:10:18.443095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:89112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.319 [2024-12-10 04:10:18.443122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:89120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.319 [2024-12-10 04:10:18.443168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.319 [2024-12-10 04:10:18.443198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.319 [2024-12-10 04:10:18.443226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.319 [2024-12-10 04:10:18.443254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.319 [2024-12-10 04:10:18.443282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.319 [2024-12-10 04:10:18.443310] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:89168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.319 [2024-12-10 04:10:18.443338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.319 [2024-12-10 04:10:18.443366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.319 [2024-12-10 04:10:18.443395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.319 [2024-12-10 04:10:18.443422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.319 [2024-12-10 04:10:18.443450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.319 [2024-12-10 04:10:18.443478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.319 [2024-12-10 04:10:18.443505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.319 [2024-12-10 04:10:18.443537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.319 [2024-12-10 04:10:18.443576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.319 [2024-12-10 04:10:18.443592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.443608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.443622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.443637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.443651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.443666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.443680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.443696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.443710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.443726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.443739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.443754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.443768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.443783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.443797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.443813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.443826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.443841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.443855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.443886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.443899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.443914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.443930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:35.320 [2024-12-10 04:10:18.443946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.443959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.443974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.443988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:89384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444230] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:89400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:89432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:89440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:89464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:89472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:89480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:89504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:89512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:89520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:89528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:89544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.320 [2024-12-10 04:10:18.444823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.320 [2024-12-10 04:10:18.444838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.321 [2024-12-10 04:10:18.444853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:65 nsid:1 lba:89560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.321 [2024-12-10 04:10:18.444882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.321 [2024-12-10 04:10:18.444898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:89568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.321 [2024-12-10 04:10:18.444912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.321 [2024-12-10 04:10:18.444927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.321 [2024-12-10 04:10:18.444941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.321 [2024-12-10 04:10:18.444956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:89584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.321 [2024-12-10 04:10:18.444970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.321 [2024-12-10 04:10:18.444985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:89592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.321 [2024-12-10 04:10:18.444999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.321 [2024-12-10 04:10:18.445014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:89600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.321 [2024-12-10 04:10:18.445027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.321 [2024-12-10 04:10:18.445042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:89608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.321 [2024-12-10 04:10:18.445056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.321 [2024-12-10 04:10:18.445070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:89616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.321 [2024-12-10 04:10:18.445087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.321 [2024-12-10 04:10:18.445102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:89624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.321 [2024-12-10 04:10:18.445116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.321 [2024-12-10 04:10:18.445130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:89632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.321 [2024-12-10 04:10:18.445144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.321 [2024-12-10 04:10:18.445159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89640 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:35.321 [2024-12-10 04:10:18.445172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:35.321 [2024-12-10 04:10:18.445200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:35.321 [2024-12-10 04:10:18.445214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:35.321 [2024-12-10 04:10:18.445225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89648 len:8 PRP1 0x0 PRP2 0x0
00:22:35.321 [2024-12-10 04:10:18.445238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:35.321 [2024-12-10 04:10:18.445303] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:22:35.321 [2024-12-10 04:10:18.445322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:22:35.321 [2024-12-10 04:10:18.448678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:22:35.321 [2024-12-10 04:10:18.448720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x586180 (9): Bad file descriptor
00:22:35.321 [2024-12-10 04:10:18.512146] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:22:35.321 8463.33 IOPS, 33.06 MiB/s [2024-12-10T03:10:29.710Z] 8488.00 IOPS, 33.16 MiB/s [2024-12-10T03:10:29.710Z] 8521.75 IOPS, 33.29 MiB/s [2024-12-10T03:10:29.710Z] 8521.89 IOPS, 33.29 MiB/s [2024-12-10T03:10:29.710Z] [2024-12-10 04:10:23.041388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:35.321 [2024-12-10 04:10:23.041431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command + ABORTED - SQ DELETION (00/08) pair repeats for the remaining queued I/Os on qid:1: READ lba 29992-30312 and WRITE lba 30320-30992, all len:8 ...]
00:22:35.324 [2024-12-10 04:10:23.045255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:35.324 [2024-12-10 04:10:23.045270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:35.324 [2024-12-10 04:10:23.045288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31000 len:8 PRP1 0x0 PRP2 0x0
00:22:35.324 [2024-12-10 04:10:23.045301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:35.324 [2024-12-10 04:10:23.045376] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:22:35.324 [2024-12-10 04:10:23.045429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:35.324 [2024-12-10 04:10:23.045448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST + ABORTED - SQ DELETION pair repeats for admin qpair cid:1, cid:2 and cid:3 ...]
00:22:35.324 [2024-12-10 04:10:23.045558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:22:35.324 [2024-12-10 04:10:23.048888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:22:35.324 [2024-12-10 04:10:23.048929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x586180 (9): Bad file descriptor
00:22:35.324 [2024-12-10 04:10:23.071878] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
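When triaging a block like the one above, only a handful of lines carry the story: each 'Start failover from ... to ...' notice marks bdev_nvme moving to the next listener, the I/Os still queued on that qpair are completed manually with ABORTED - SQ DELETION while the connection is torn down, and the sequence ends with 'Resetting controller successful' once the new path is up. A rough, hypothetical triage helper (not part of the SPDK test suite; console.log stands in for a saved copy of this console output):

    # Pull out just the failover transitions and their outcomes.
    grep -E 'Start failover from|in failed state|Resetting controller successful' console.log
    # Count how many queued I/Os were aborted while the submission queue was deleted.
    grep -c 'ABORTED - SQ DELETION' console.log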
00:22:35.324 8499.10 IOPS, 33.20 MiB/s [2024-12-10T03:10:29.713Z] 8510.00 IOPS, 33.24 MiB/s [2024-12-10T03:10:29.713Z] 8520.08 IOPS, 33.28 MiB/s [2024-12-10T03:10:29.713Z] 8524.77 IOPS, 33.30 MiB/s [2024-12-10T03:10:29.713Z] 8532.50 IOPS, 33.33 MiB/s [2024-12-10T03:10:29.713Z] 8540.33 IOPS, 33.36 MiB/s
00:22:35.324 Latency(us)
00:22:35.324 [2024-12-10T03:10:29.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:35.324 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:35.324 Verification LBA range: start 0x0 length 0x4000
00:22:35.324 NVMe0n1 : 15.01 8543.86 33.37 319.61 0.00 14413.27 555.24 16893.72
00:22:35.324 [2024-12-10T03:10:29.713Z] ===================================================================================================================
00:22:35.324 [2024-12-10T03:10:29.713Z] Total : 8543.86 33.37 319.61 0.00 14413.27 555.24 16893.72
00:22:35.324 Received shutdown signal, test time was about 15.000000 seconds
00:22:35.324
00:22:35.324 Latency(us)
00:22:35.324 [2024-12-10T03:10:29.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:35.324 [2024-12-10T03:10:29.713Z] ===================================================================================================================
00:22:35.324 [2024-12-10T03:10:29.713Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:35.324 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:35.324 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:22:35.324 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:22:35.324 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2465269
00:22:35.324 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:35.324 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2465269 /var/tmp/bdevperf.sock
00:22:35.325 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2465269 ']'
00:22:35.325 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:35.325 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:35.325 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:35.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
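The xtrace lines above are the pass/fail gate for the first half of the test: failover.sh counts the 'Resetting controller successful' messages in the captured bdevperf output and expects exactly three, one per failover hop, before it launches the second bdevperf instance in RPC mode. A minimal standalone sketch of that gate, assuming the run's output was saved to try.txt as this job does (the path comes from the cat/rm steps later in this log; the rest is illustrative, not the script's literal code):

    #!/usr/bin/env bash
    # Sketch of the reset-count gate applied after the 15-second bdevperf run above.
    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt

    count=$(grep -c 'Resetting controller successful' "$log")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi
    echo "failover gate passed: $count successful resets"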
00:22:35.325 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.325 04:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:35.325 04:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.325 04:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:35.325 04:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:35.325 [2024-12-10 04:10:29.291474] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:35.325 04:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:35.325 [2024-12-10 04:10:29.580193] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:35.325 04:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:35.583 NVMe0n1 00:22:35.841 04:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:36.098 00:22:36.098 04:10:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:36.356 00:22:36.613 04:10:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:36.613 04:10:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:36.871 04:10:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:37.128 04:10:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:40.407 04:10:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:40.407 04:10:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:40.407 04:10:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2465939 00:22:40.407 04:10:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:40.407 04:10:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2465939 00:22:41.779 { 00:22:41.779 "results": [ 00:22:41.779 { 00:22:41.779 "job": "NVMe0n1", 00:22:41.779 "core_mask": "0x1", 
00:22:41.779 "workload": "verify", 00:22:41.779 "status": "finished", 00:22:41.779 "verify_range": { 00:22:41.779 "start": 0, 00:22:41.779 "length": 16384 00:22:41.779 }, 00:22:41.779 "queue_depth": 128, 00:22:41.779 "io_size": 4096, 00:22:41.779 "runtime": 1.009149, 00:22:41.779 "iops": 8510.140722529577, 00:22:41.779 "mibps": 33.24273719738116, 00:22:41.779 "io_failed": 0, 00:22:41.779 "io_timeout": 0, 00:22:41.779 "avg_latency_us": 14973.205723058878, 00:22:41.779 "min_latency_us": 2342.305185185185, 00:22:41.779 "max_latency_us": 13107.2 00:22:41.779 } 00:22:41.779 ], 00:22:41.779 "core_count": 1 00:22:41.779 } 00:22:41.779 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:41.779 [2024-12-10 04:10:28.803504] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:22:41.779 [2024-12-10 04:10:28.803617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2465269 ] 00:22:41.779 [2024-12-10 04:10:28.871339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.779 [2024-12-10 04:10:28.927278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.779 [2024-12-10 04:10:31.263814] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:41.779 [2024-12-10 04:10:31.263917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.779 [2024-12-10 04:10:31.263941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.779 [2024-12-10 04:10:31.263969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.779 [2024-12-10 04:10:31.263983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.779 [2024-12-10 04:10:31.263996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.779 [2024-12-10 04:10:31.264010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.780 [2024-12-10 04:10:31.264024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.780 [2024-12-10 04:10:31.264037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.780 [2024-12-10 04:10:31.264051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:22:41.780 [2024-12-10 04:10:31.264098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:41.780 [2024-12-10 04:10:31.264139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd7180 (9): Bad file descriptor 00:22:41.780 [2024-12-10 04:10:31.277011] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:22:41.780 Running I/O for 1 seconds... 00:22:41.780 8460.00 IOPS, 33.05 MiB/s 00:22:41.780 Latency(us) 00:22:41.780 [2024-12-10T03:10:36.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.780 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:41.780 Verification LBA range: start 0x0 length 0x4000 00:22:41.780 NVMe0n1 : 1.01 8510.14 33.24 0.00 0.00 14973.21 2342.31 13107.20 00:22:41.780 [2024-12-10T03:10:36.169Z] =================================================================================================================== 00:22:41.780 [2024-12-10T03:10:36.169Z] Total : 8510.14 33.24 0.00 0.00 14973.21 2342.31 13107.20 00:22:41.780 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:41.780 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:41.780 04:10:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:42.037 04:10:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:42.037 04:10:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:42.295 04:10:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:42.553 04:10:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:45.832 04:10:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:45.832 04:10:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:45.832 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2465269 00:22:45.832 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2465269 ']' 00:22:45.832 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2465269 00:22:45.832 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:45.832 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:45.832 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2465269 00:22:45.832 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:45.832 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:45.832 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2465269' 00:22:45.832 killing process with pid 2465269 00:22:45.832 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2465269 00:22:45.832 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2465269 00:22:46.090 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:46.090 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:46.348 rmmod nvme_tcp 00:22:46.348 rmmod nvme_fabrics 00:22:46.348 rmmod nvme_keyring 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2462998 ']' 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2462998 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2462998 ']' 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2462998 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.348 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2462998 00:22:46.606 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:46.606 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:46.606 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2462998' 00:22:46.606 killing process with pid 2462998 00:22:46.606 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2462998 00:22:46.606 04:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2462998 00:22:46.864 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:22:46.864 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:46.864 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:46.864 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:46.864 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:46.864 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:46.864 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:46.864 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:46.864 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:46.864 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.864 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.864 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.767 04:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:48.768 00:22:48.768 real 0m35.923s 00:22:48.768 user 2m6.881s 00:22:48.768 sys 0m5.781s 00:22:48.768 04:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:48.768 04:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:48.768 ************************************ 00:22:48.768 END TEST nvmf_failover 00:22:48.768 ************************************ 00:22:48.768 04:10:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:48.768 04:10:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:48.768 04:10:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:48.768 04:10:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.768 ************************************ 00:22:48.768 START TEST nvmf_host_discovery 00:22:48.768 ************************************ 00:22:48.768 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:49.027 * Looking for test storage... 
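The nvmf_failover run that ends above drives multipath failover entirely through bdevperf's RPC socket: it checks that the NVMe0 controller exists, detaches the 4422 and 4421 paths in turn while I/O keeps running, and then confirms the controller is still reported on the surviving path. A minimal sketch of that detach/verify loop, reusing the /var/tmp/bdevperf.sock socket, NVMe0 controller name, and subsystem NQN shown in the log (ordering and timing are simplified, so treat it as illustrative rather than the test script itself):

    RPC='scripts/rpc.py -s /var/tmp/bdevperf.sock'   # bdevperf RPC socket used by the test

    # Controller must be present before any path is removed.
    $RPC bdev_nvme_get_controllers | grep -q NVMe0

    # Drop two of the listeners; bdev_nvme fails I/O over to the remaining path.
    $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3

    # The controller should still be listed on the surviving path.
    $RPC bdev_nvme_get_controllers | grep -q NVMe0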
00:22:49.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:49.027 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:49.027 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:22:49.027 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:49.027 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:49.027 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:49.027 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:49.027 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:49.027 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.027 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:49.027 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:49.027 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:49.027 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:49.027 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:49.027 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:49.027 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:49.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.028 --rc genhtml_branch_coverage=1 00:22:49.028 --rc genhtml_function_coverage=1 00:22:49.028 --rc genhtml_legend=1 00:22:49.028 --rc geninfo_all_blocks=1 00:22:49.028 --rc geninfo_unexecuted_blocks=1 00:22:49.028 00:22:49.028 ' 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:49.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.028 --rc genhtml_branch_coverage=1 00:22:49.028 --rc genhtml_function_coverage=1 00:22:49.028 --rc genhtml_legend=1 00:22:49.028 --rc geninfo_all_blocks=1 00:22:49.028 --rc geninfo_unexecuted_blocks=1 00:22:49.028 00:22:49.028 ' 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:49.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.028 --rc genhtml_branch_coverage=1 00:22:49.028 --rc genhtml_function_coverage=1 00:22:49.028 --rc genhtml_legend=1 00:22:49.028 --rc geninfo_all_blocks=1 00:22:49.028 --rc geninfo_unexecuted_blocks=1 00:22:49.028 00:22:49.028 ' 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:49.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.028 --rc genhtml_branch_coverage=1 00:22:49.028 --rc genhtml_function_coverage=1 00:22:49.028 --rc genhtml_legend=1 00:22:49.028 --rc geninfo_all_blocks=1 00:22:49.028 --rc geninfo_unexecuted_blocks=1 00:22:49.028 00:22:49.028 ' 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:49.028 04:10:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:49.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.028 04:10:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:51.563 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:51.563 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.563 04:10:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:51.563 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:51.563 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.563 
04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.563 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:51.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:22:51.564 00:22:51.564 --- 10.0.0.2 ping statistics --- 00:22:51.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.564 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:51.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:22:51.564 00:22:51.564 --- 10.0.0.1 ping statistics --- 00:22:51.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.564 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2468694 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2468694 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2468694 ']' 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.564 [2024-12-10 04:10:45.671262] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:22:51.564 [2024-12-10 04:10:45.671349] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.564 [2024-12-10 04:10:45.743499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.564 [2024-12-10 04:10:45.799321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.564 [2024-12-10 04:10:45.799388] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.564 [2024-12-10 04:10:45.799417] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.564 [2024-12-10 04:10:45.799429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.564 [2024-12-10 04:10:45.799438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.564 [2024-12-10 04:10:45.800084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.564 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.822 [2024-12-10 04:10:45.947609] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.822 [2024-12-10 04:10:45.955871] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.822 null0 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.822 null1 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2468714 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2468714 /tmp/host.sock 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2468714 ']' 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:51.822 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.822 04:10:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.822 [2024-12-10 04:10:46.030775] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:22:51.822 [2024-12-10 04:10:46.030865] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2468714 ] 00:22:51.822 [2024-12-10 04:10:46.096966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.822 [2024-12-10 04:10:46.153967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.081 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:52.338 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.339 [2024-12-10 04:10:46.549386] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:52.339 04:10:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.339 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:22:52.596 04:10:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:53.162 [2024-12-10 04:10:47.335689] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:53.162 [2024-12-10 04:10:47.335724] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:53.162 [2024-12-10 04:10:47.335750] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:53.162 [2024-12-10 04:10:47.422050] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:53.162 [2024-12-10 04:10:47.483844] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:22:53.162 [2024-12-10 04:10:47.484886] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xf49aa0:1 started. 00:22:53.162 [2024-12-10 04:10:47.486699] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:53.162 [2024-12-10 04:10:47.486721] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:53.162 [2024-12-10 04:10:47.493536] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xf49aa0 was disconnected and freed. delete nvme_qpair. 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:53.420 04:10:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:53.420 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:53.678 04:10:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.678 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.679 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:53.679 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.679 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:53.679 04:10:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:53.936 [2024-12-10 04:10:48.112756] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xf49c80:1 started. 
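The "$(get_subsystem_names)" and "$(get_bdev_list)" conditions polled above come from small jq wrappers over the host RPC socket plus a generic retry helper. A minimal sketch reconstructed from the xtrace (the exact bodies in host/discovery.sh and common/autotest_common.sh may differ; rpc_cmd wrapping scripts/rpc.py and /tmp/host.sock as the host socket are assumptions):

    get_subsystem_names() {
        # Names of the NVMe controllers currently attached on the host side.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # Namespaces exposed as bdevs on the host, e.g. "nvme0n1 nvme0n2".
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    waitforcondition() {
        # Re-evaluate a shell condition up to 10 times, one second apart.
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

    # Usage mirroring discovery.sh@105/@106 above:
    #   waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    #   waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'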
00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.936 [2024-12-10 04:10:48.157205] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xf49c80 was disconnected and freed. delete nvme_qpair. 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:53.936 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.869 [2024-12-10 04:10:49.233223] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:54.869 [2024-12-10 04:10:49.234242] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:54.869 [2024-12-10 04:10:49.234283] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:54.869 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:54.870 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.870 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:54.870 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.870 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:54.870 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.127 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.127 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:55.127 04:10:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:55.127 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:55.127 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:55.127 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:55.127 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:55.127 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.128 [2024-12-10 04:10:49.320445] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:55.128 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:55.385 [2024-12-10 04:10:49.626170] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:22:55.385 [2024-12-10 04:10:49.626248] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:55.385 [2024-12-10 04:10:49.626265] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:55.385 [2024-12-10 04:10:49.626288] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.358 [2024-12-10 04:10:50.465643] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:56.358 [2024-12-10 04:10:50.465698] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.358 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:56.358 [2024-12-10 04:10:50.470525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.358 [2024-12-10 04:10:50.470565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.358 [2024-12-10 04:10:50.470599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.359 [2024-12-10 04:10:50.470613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.359 [2024-12-10 04:10:50.470627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.359 [2024-12-10 04:10:50.470656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.359 [2024-12-10 04:10:50.470671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.359 [2024-12-10 04:10:50.470684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.359 [2024-12-10 04:10:50.470697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a050 is same with the state(6) to be set 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:56.359 [2024-12-10 04:10:50.480513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a050 (9): Bad file descriptor 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.359 [2024-12-10 04:10:50.490562] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:56.359 [2024-12-10 04:10:50.490585] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:56.359 [2024-12-10 04:10:50.490598] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:56.359 [2024-12-10 04:10:50.490608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:56.359 [2024-12-10 04:10:50.490644] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:56.359 [2024-12-10 04:10:50.490815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.359 [2024-12-10 04:10:50.490843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1a050 with addr=10.0.0.2, port=4420 00:22:56.359 [2024-12-10 04:10:50.490860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a050 is same with the state(6) to be set 00:22:56.359 [2024-12-10 04:10:50.490900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a050 (9): Bad file descriptor 00:22:56.359 [2024-12-10 04:10:50.490949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:56.359 [2024-12-10 04:10:50.490974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:56.359 [2024-12-10 04:10:50.490993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
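The is_notification_count_eq checks interleaved with the qpair messages above count the async discovery notifications the host has received since the last seen notify_id (the trace shows notify_id advancing 0 -> 1 -> 2 as bdevs are added). A hedged sketch of that bookkeeping, based on the jq filter and variable names visible in the trace (the increment logic is inferred, not shown verbatim):

    get_notification_count() {
        # Notifications newer than $notify_id, read over the host RPC socket.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        local expected_count=$1
        # Poll until the newly observed notification count matches the expectation.
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }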
00:22:56.359 [2024-12-10 04:10:50.491006] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:56.359 [2024-12-10 04:10:50.491016] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:56.359 [2024-12-10 04:10:50.491024] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:56.359 [2024-12-10 04:10:50.500678] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:56.359 [2024-12-10 04:10:50.500698] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:56.359 [2024-12-10 04:10:50.500707] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:56.359 [2024-12-10 04:10:50.500714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:56.359 [2024-12-10 04:10:50.500738] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:56.359 [2024-12-10 04:10:50.500946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.359 [2024-12-10 04:10:50.500973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1a050 with addr=10.0.0.2, port=4420 00:22:56.359 [2024-12-10 04:10:50.500989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a050 is same with the state(6) to be set 00:22:56.359 [2024-12-10 04:10:50.501011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a050 (9): Bad file descriptor 00:22:56.359 [2024-12-10 04:10:50.501056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:56.359 [2024-12-10 04:10:50.501075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:56.359 [2024-12-10 04:10:50.501089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:56.359 [2024-12-10 04:10:50.501101] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:56.359 [2024-12-10 04:10:50.501110] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:56.359 [2024-12-10 04:10:50.501117] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:56.359 [2024-12-10 04:10:50.510772] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:56.359 [2024-12-10 04:10:50.510795] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:56.359 [2024-12-10 04:10:50.510804] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:56.359 [2024-12-10 04:10:50.510811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:56.359 [2024-12-10 04:10:50.510837] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:56.359 [2024-12-10 04:10:50.510976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.359 [2024-12-10 04:10:50.511005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1a050 with addr=10.0.0.2, port=4420 00:22:56.359 [2024-12-10 04:10:50.511021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a050 is same with the state(6) to be set 00:22:56.359 [2024-12-10 04:10:50.511043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a050 (9): Bad file descriptor 00:22:56.359 [2024-12-10 04:10:50.511070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:56.359 [2024-12-10 04:10:50.511086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:56.359 [2024-12-10 04:10:50.511099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:56.359 [2024-12-10 04:10:50.511110] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:56.359 [2024-12-10 04:10:50.511119] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:56.359 [2024-12-10 04:10:50.511127] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.359 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:56.359 [2024-12-10 04:10:50.520871] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:56.359 [2024-12-10 04:10:50.520907] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:22:56.359 [2024-12-10 04:10:50.520916] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:56.359 [2024-12-10 04:10:50.520923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:56.359 [2024-12-10 04:10:50.520946] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:56.359 [2024-12-10 04:10:50.521111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.359 [2024-12-10 04:10:50.521138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1a050 with addr=10.0.0.2, port=4420 00:22:56.359 [2024-12-10 04:10:50.521154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a050 is same with the state(6) to be set 00:22:56.359 [2024-12-10 04:10:50.521176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a050 (9): Bad file descriptor 00:22:56.359 [2024-12-10 04:10:50.522087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:56.359 [2024-12-10 04:10:50.522108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:56.359 [2024-12-10 04:10:50.522121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:56.359 [2024-12-10 04:10:50.522155] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:56.359 [2024-12-10 04:10:50.522165] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:56.359 [2024-12-10 04:10:50.522172] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:56.360 [2024-12-10 04:10:50.530981] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:56.360 [2024-12-10 04:10:50.531001] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:56.360 [2024-12-10 04:10:50.531010] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:56.360 [2024-12-10 04:10:50.531017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:56.360 [2024-12-10 04:10:50.531056] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:56.360 [2024-12-10 04:10:50.531273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.360 [2024-12-10 04:10:50.531301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1a050 with addr=10.0.0.2, port=4420 00:22:56.360 [2024-12-10 04:10:50.531317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a050 is same with the state(6) to be set 00:22:56.360 [2024-12-10 04:10:50.531340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a050 (9): Bad file descriptor 00:22:56.360 [2024-12-10 04:10:50.531372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:56.360 [2024-12-10 04:10:50.531389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:56.360 [2024-12-10 04:10:50.531402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:56.360 [2024-12-10 04:10:50.531414] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:56.360 [2024-12-10 04:10:50.531422] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:56.360 [2024-12-10 04:10:50.531430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:56.360 [2024-12-10 04:10:50.541091] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:56.360 [2024-12-10 04:10:50.541111] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:56.360 [2024-12-10 04:10:50.541120] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:56.360 [2024-12-10 04:10:50.541127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:56.360 [2024-12-10 04:10:50.541166] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:56.360 [2024-12-10 04:10:50.541294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.360 [2024-12-10 04:10:50.541322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1a050 with addr=10.0.0.2, port=4420 00:22:56.360 [2024-12-10 04:10:50.541337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a050 is same with the state(6) to be set 00:22:56.360 [2024-12-10 04:10:50.541359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a050 (9): Bad file descriptor 00:22:56.360 [2024-12-10 04:10:50.541391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:56.360 [2024-12-10 04:10:50.541409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:56.360 [2024-12-10 04:10:50.541428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:56.360 [2024-12-10 04:10:50.541441] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:22:56.360 [2024-12-10 04:10:50.541450] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:56.360 [2024-12-10 04:10:50.541457] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:56.360 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.360 [2024-12-10 04:10:50.551199] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:56.360 [2024-12-10 04:10:50.551220] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:56.360 [2024-12-10 04:10:50.551228] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:56.360 [2024-12-10 04:10:50.551235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:56.360 [2024-12-10 04:10:50.551273] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:56.360 [2024-12-10 04:10:50.551476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.360 [2024-12-10 04:10:50.551504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1a050 with addr=10.0.0.2, port=4420 00:22:56.360 [2024-12-10 04:10:50.551520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a050 is same with the state(6) to be set 00:22:56.360 [2024-12-10 04:10:50.551542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a050 (9): Bad file descriptor 00:22:56.360 [2024-12-10 04:10:50.551601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:56.360 [2024-12-10 04:10:50.551621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:56.360 [2024-12-10 04:10:50.551634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:56.360 [2024-12-10 04:10:50.551646] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:56.360 [2024-12-10 04:10:50.551655] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:56.360 [2024-12-10 04:10:50.551662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
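The repeated "connect() failed, errno = 111" and "Bad file descriptor" lines above are the host controller retrying 10.0.0.2:4420 after the test removed that listener from the target at discovery.sh@127; each cycle is delete qpairs, disconnect, reconnect attempt, failure, until the discovery poller drops the 4420 path. The triggering step is a single target-side RPC, repeated here as a sketch (same arguments as in the trace; the default target RPC socket is assumed):

    # Drop the first listener; existing host connections to port 4420 start
    # failing with ECONNREFUSED (111) until only the 4421 path remains.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420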
00:22:56.360 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:56.360 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.360 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:56.360 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:56.360 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.360 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.360 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:56.360 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:56.360 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:56.360 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:56.360 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.360 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.360 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:56.360 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:56.360 [2024-12-10 04:10:50.561306] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:56.360 [2024-12-10 04:10:50.561326] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:56.360 [2024-12-10 04:10:50.561335] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:56.360 [2024-12-10 04:10:50.561342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:56.360 [2024-12-10 04:10:50.561380] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:56.360 [2024-12-10 04:10:50.561553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.360 [2024-12-10 04:10:50.561581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1a050 with addr=10.0.0.2, port=4420 00:22:56.360 [2024-12-10 04:10:50.561597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a050 is same with the state(6) to be set 00:22:56.360 [2024-12-10 04:10:50.561620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a050 (9): Bad file descriptor 00:22:56.360 [2024-12-10 04:10:50.561832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:56.360 [2024-12-10 04:10:50.561859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:56.360 [2024-12-10 04:10:50.561872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:56.360 [2024-12-10 04:10:50.561883] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:56.360 [2024-12-10 04:10:50.561891] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:56.360 [2024-12-10 04:10:50.561898] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:56.360 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.360 [2024-12-10 04:10:50.571413] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:56.360 [2024-12-10 04:10:50.571433] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:56.360 [2024-12-10 04:10:50.571442] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:56.360 [2024-12-10 04:10:50.571449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:56.360 [2024-12-10 04:10:50.571485] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:56.360 [2024-12-10 04:10:50.571654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.360 [2024-12-10 04:10:50.571682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1a050 with addr=10.0.0.2, port=4420 00:22:56.360 [2024-12-10 04:10:50.571698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a050 is same with the state(6) to be set 00:22:56.360 [2024-12-10 04:10:50.571720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a050 (9): Bad file descriptor 00:22:56.360 [2024-12-10 04:10:50.571752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:56.360 [2024-12-10 04:10:50.571769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:56.360 [2024-12-10 04:10:50.571788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:22:56.360 [2024-12-10 04:10:50.571801] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:56.360 [2024-12-10 04:10:50.571810] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:56.360 [2024-12-10 04:10:50.571818] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:56.360 [2024-12-10 04:10:50.581519] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:56.360 [2024-12-10 04:10:50.581539] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:56.361 [2024-12-10 04:10:50.581568] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:56.361 [2024-12-10 04:10:50.581577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:56.361 [2024-12-10 04:10:50.581602] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:56.361 [2024-12-10 04:10:50.581698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.361 [2024-12-10 04:10:50.581725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1a050 with addr=10.0.0.2, port=4420 00:22:56.361 [2024-12-10 04:10:50.581741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a050 is same with the state(6) to be set 00:22:56.361 [2024-12-10 04:10:50.581762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a050 (9): Bad file descriptor 00:22:56.361 [2024-12-10 04:10:50.581820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:56.361 [2024-12-10 04:10:50.581840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:56.361 [2024-12-10 04:10:50.581854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:56.361 [2024-12-10 04:10:50.581866] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:56.361 [2024-12-10 04:10:50.581874] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:56.361 [2024-12-10 04:10:50.581882] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:56.361 [2024-12-10 04:10:50.591635] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:56.361 [2024-12-10 04:10:50.591654] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:56.361 [2024-12-10 04:10:50.591663] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:56.361 [2024-12-10 04:10:50.591670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:56.361 [2024-12-10 04:10:50.591693] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
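Once the retries settle, the test verifies through get_subsystem_paths that only the 4421 path is left on nvme0. That helper (host/discovery.sh@63 in the trace) is essentially a jq projection over the controller's transport IDs; a minimal sketch, assuming the same /tmp/host.sock host RPC socket:

    get_subsystem_paths() {
        # trsvcid (TCP port) of every path attached to controller $1,
        # numerically sorted onto one line, e.g. "4420 4421" or "4421".
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # e.g. waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'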
00:22:56.361 [2024-12-10 04:10:50.591857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.361 [2024-12-10 04:10:50.591883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1a050 with addr=10.0.0.2, port=4420 00:22:56.361 [2024-12-10 04:10:50.591899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a050 is same with the state(6) to be set 00:22:56.361 [2024-12-10 04:10:50.591920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a050 (9): Bad file descriptor 00:22:56.361 [2024-12-10 04:10:50.591952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:56.361 [2024-12-10 04:10:50.591974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:56.361 [2024-12-10 04:10:50.591989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:56.361 [2024-12-10 04:10:50.592000] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:56.361 [2024-12-10 04:10:50.592009] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:56.361 [2024-12-10 04:10:50.592017] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:56.361 [2024-12-10 04:10:50.592967] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:56.361 [2024-12-10 04:10:50.592993] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:56.361 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:22:56.361 04:10:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.293 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:57.551 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.552 04:10:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.484 [2024-12-10 04:10:52.849511] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:58.484 [2024-12-10 04:10:52.849557] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:58.484 [2024-12-10 04:10:52.849582] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:58.742 [2024-12-10 04:10:52.937854] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:59.000 [2024-12-10 04:10:53.244375] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:22:59.000 [2024-12-10 04:10:53.245145] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1080dd0:1 started. 
00:22:59.000 [2024-12-10 04:10:53.247212] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:59.000 [2024-12-10 04:10:53.247245] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:59.000 [2024-12-10 04:10:53.248901] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1080dd0 was disconnected and freed. delete nvme_qpair. 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.000 request: 00:22:59.000 { 00:22:59.000 "name": "nvme", 00:22:59.000 "trtype": "tcp", 00:22:59.000 "traddr": "10.0.0.2", 00:22:59.000 "adrfam": "ipv4", 00:22:59.000 "trsvcid": "8009", 00:22:59.000 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:59.000 "wait_for_attach": true, 00:22:59.000 "method": "bdev_nvme_start_discovery", 00:22:59.000 "req_id": 1 00:22:59.000 } 00:22:59.000 Got JSON-RPC error response 00:22:59.000 response: 00:22:59.000 { 00:22:59.000 "code": -17, 00:22:59.000 "message": "File exists" 00:22:59.000 } 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.000 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.000 request: 00:22:59.000 { 00:22:59.000 "name": "nvme_second", 00:22:59.000 "trtype": "tcp", 00:22:59.000 "traddr": "10.0.0.2", 00:22:59.000 "adrfam": "ipv4", 00:22:59.000 "trsvcid": "8009", 00:22:59.000 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:59.000 "wait_for_attach": true, 00:22:59.000 "method": 
"bdev_nvme_start_discovery", 00:22:59.001 "req_id": 1 00:22:59.001 } 00:22:59.001 Got JSON-RPC error response 00:22:59.001 response: 00:22:59.001 { 00:22:59.001 "code": -17, 00:22:59.001 "message": "File exists" 00:22:59.001 } 00:22:59.001 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:59.001 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:59.001 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:59.001 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:59.001 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:59.001 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:59.001 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:59.001 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.001 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:59.001 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.001 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:59.001 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:59.001 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:59.259 04:10:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.259 04:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.192 [2024-12-10 04:10:54.446565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.192 [2024-12-10 04:10:54.446624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf289d0 with addr=10.0.0.2, port=8010 00:23:00.192 [2024-12-10 04:10:54.446652] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:00.192 [2024-12-10 04:10:54.446667] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:00.192 [2024-12-10 04:10:54.446680] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:01.124 [2024-12-10 04:10:55.448982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.124 [2024-12-10 04:10:55.449016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf289d0 with addr=10.0.0.2, port=8010 00:23:01.124 [2024-12-10 04:10:55.449052] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:01.124 [2024-12-10 04:10:55.449066] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:01.124 [2024-12-10 04:10:55.449078] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:02.498 [2024-12-10 04:10:56.451267] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:02.498 request: 00:23:02.498 { 00:23:02.498 "name": "nvme_second", 00:23:02.498 "trtype": "tcp", 00:23:02.498 "traddr": "10.0.0.2", 00:23:02.498 "adrfam": "ipv4", 00:23:02.498 "trsvcid": "8010", 00:23:02.498 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:02.498 "wait_for_attach": false, 00:23:02.498 "attach_timeout_ms": 3000, 00:23:02.498 "method": "bdev_nvme_start_discovery", 00:23:02.498 "req_id": 1 00:23:02.498 } 00:23:02.498 Got JSON-RPC error response 00:23:02.498 response: 00:23:02.498 { 00:23:02.498 "code": -110, 00:23:02.498 "message": "Connection timed out" 00:23:02.498 } 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:02.498 04:10:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2468714 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:02.498 rmmod nvme_tcp 00:23:02.498 rmmod nvme_fabrics 00:23:02.498 rmmod nvme_keyring 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2468694 ']' 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2468694 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2468694 ']' 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2468694 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2468694 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2468694' 00:23:02.498 killing process with pid 2468694 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2468694 
00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2468694 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.498 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.038 04:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:05.038 00:23:05.038 real 0m15.754s 00:23:05.038 user 0m23.785s 00:23:05.038 sys 0m3.086s 00:23:05.038 04:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:05.038 04:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.038 ************************************ 00:23:05.038 END TEST nvmf_host_discovery 00:23:05.038 ************************************ 00:23:05.038 04:10:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:05.038 04:10:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:05.038 04:10:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:05.038 04:10:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.038 ************************************ 00:23:05.038 START TEST nvmf_host_multipath_status 00:23:05.038 ************************************ 00:23:05.038 04:10:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:05.038 * Looking for test storage... 
00:23:05.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:05.038 04:10:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:05.038 04:10:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:23:05.038 04:10:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:05.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.038 --rc genhtml_branch_coverage=1 00:23:05.038 --rc genhtml_function_coverage=1 00:23:05.038 --rc genhtml_legend=1 00:23:05.038 --rc geninfo_all_blocks=1 00:23:05.038 --rc geninfo_unexecuted_blocks=1 00:23:05.038 00:23:05.038 ' 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:05.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.038 --rc genhtml_branch_coverage=1 00:23:05.038 --rc genhtml_function_coverage=1 00:23:05.038 --rc genhtml_legend=1 00:23:05.038 --rc geninfo_all_blocks=1 00:23:05.038 --rc geninfo_unexecuted_blocks=1 00:23:05.038 00:23:05.038 ' 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:05.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.038 --rc genhtml_branch_coverage=1 00:23:05.038 --rc genhtml_function_coverage=1 00:23:05.038 --rc genhtml_legend=1 00:23:05.038 --rc geninfo_all_blocks=1 00:23:05.038 --rc geninfo_unexecuted_blocks=1 00:23:05.038 00:23:05.038 ' 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:05.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.038 --rc genhtml_branch_coverage=1 00:23:05.038 --rc genhtml_function_coverage=1 00:23:05.038 --rc genhtml_legend=1 00:23:05.038 --rc geninfo_all_blocks=1 00:23:05.038 --rc geninfo_unexecuted_blocks=1 00:23:05.038 00:23:05.038 ' 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.038 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:05.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:05.039 04:10:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:06.941 04:11:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:06.941 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:06.941 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:06.941 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.941 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:23:06.942 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.942 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.201 04:11:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:07.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:23:07.201 00:23:07.201 --- 10.0.0.2 ping statistics --- 00:23:07.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.201 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:07.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:23:07.201 00:23:07.201 --- 10.0.0.1 ping statistics --- 00:23:07.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.201 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2472150 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2472150 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2472150 ']' 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.201 04:11:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.201 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:07.201 [2024-12-10 04:11:01.463234] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:23:07.201 [2024-12-10 04:11:01.463333] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.201 [2024-12-10 04:11:01.538097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:07.459 [2024-12-10 04:11:01.599216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.459 [2024-12-10 04:11:01.599268] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.459 [2024-12-10 04:11:01.599297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.459 [2024-12-10 04:11:01.599308] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.459 [2024-12-10 04:11:01.599317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.459 [2024-12-10 04:11:01.600957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.459 [2024-12-10 04:11:01.600963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.459 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.459 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:07.459 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:07.459 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:07.459 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:07.459 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.459 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2472150 00:23:07.459 04:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:07.716 [2024-12-10 04:11:02.001938] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.716 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:07.974 Malloc0 00:23:07.974 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:23:08.231 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:08.489 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:08.747 [2024-12-10 04:11:03.100673] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.747 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:09.004 [2024-12-10 04:11:03.369288] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:09.262 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2472321 00:23:09.262 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.262 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2472321 /var/tmp/bdevperf.sock 00:23:09.262 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:09.262 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2472321 ']' 00:23:09.262 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.262 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.262 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
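The trace above finishes the target-side bring-up for the multipath test: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2 while cvl_0_1 keeps 10.0.0.1 on the host, nvmf_tgt runs inside the namespace, and subsystem nqn.2016-06.io.spdk:cnode1 exports a single Malloc0 namespace through two TCP listeners on ports 4420 and 4421; bdevperf is then launched with its own RPC socket so the two host-side paths can be attached and inspected independently. A condensed sketch of that sequence and of the controller attaches that follow in the next lines, with the absolute Jenkins workspace paths shortened to paths relative to the SPDK repository root (an assumption, as is the explicit backgrounding):

  # target side: nvmf_tgt inside the namespace, configured over the default /var/tmp/spdk.sock
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # host side: bdevperf on a separate RPC socket, one controller attach per listener, explicit multipath mode
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

Both attach calls use the same bdev name Nvme0, so the two listeners become two I/O paths of the single Nvme0n1 bdev that the perform_tests run below exercises.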
00:23:09.262 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.262 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:09.520 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.520 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:09.520 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:09.777 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:10.035 Nvme0n1 00:23:10.035 04:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:10.599 Nvme0n1 00:23:10.599 04:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:10.599 04:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:13.124 04:11:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:13.124 04:11:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:13.124 04:11:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:13.382 04:11:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:14.316 04:11:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:14.316 04:11:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:14.316 04:11:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.316 04:11:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:14.574 04:11:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.574 04:11:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:14.574 04:11:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.574 04:11:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:14.832 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:14.832 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:14.832 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.832 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:15.089 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.089 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:15.089 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.089 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:15.348 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.348 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:15.348 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.348 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:15.606 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.606 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:15.606 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.606 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:15.863 04:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.864 04:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:15.864 04:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
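Each set_ANA_state step in this trace is a pair of nvmf_subsystem_listener_set_ana_state RPCs against the target, one per listener, and each check_status step is six port_status probes against the bdevperf RPC socket: current, connected and accessible for port 4420 followed by the same three for port 4421, each extracted from bdev_nvme_get_io_paths with a jq filter. A minimal sketch of one such state flip and probe, reusing the exact filter from the trace and assuming the bdevperf socket set up above:

  # target side: demote the 4420 listener (states seen in this run: optimized, non_optimized, inaccessible)
  ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  # host side: read back how bdevperf now sees that path
  state=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current')
  [[ $state == false ]]   # the test script does the same string comparison inline

The short sleep between each state change and the following check_status gives the host a chance to process the ANA change notification before the flags are read back.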
00:23:16.121 04:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:16.379 04:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:17.753 04:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:17.753 04:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:17.753 04:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.753 04:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:17.753 04:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:17.753 04:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:17.753 04:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.753 04:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:18.011 04:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.011 04:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:18.011 04:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.011 04:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:18.269 04:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.269 04:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:18.269 04:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.269 04:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:18.527 04:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.527 04:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:18.527 04:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:23:18.527 04:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:18.785 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.785 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:18.785 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.785 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:19.043 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.043 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:19.044 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:19.609 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:19.609 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:20.984 04:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:20.984 04:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:20.984 04:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.984 04:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:20.984 04:11:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.984 04:11:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:20.984 04:11:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.984 04:11:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:21.241 04:11:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:21.241 04:11:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:21.241 04:11:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.241 04:11:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:21.499 04:11:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.499 04:11:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:21.499 04:11:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.499 04:11:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:21.757 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.757 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:21.757 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.757 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:22.015 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.015 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:22.015 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.015 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:22.273 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.273 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:22.273 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:22.531 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:22.789 04:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:24.164 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:24.164 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:24.164 04:11:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.164 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:24.164 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.164 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:24.164 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.164 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:24.457 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:24.457 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:24.457 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.457 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:24.744 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.744 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:24.744 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.744 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:25.003 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.003 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:25.003 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.003 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:25.261 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.261 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:25.261 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.261 04:11:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:25.519 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:25.519 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:25.519 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:25.777 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:26.035 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:27.406 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:27.406 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:27.406 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.406 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:27.406 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:27.406 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:27.406 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.406 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:27.664 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:27.664 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:27.664 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.664 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:27.921 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.921 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:27.922 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.922 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:28.179 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.179 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:28.179 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.179 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:28.437 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:28.437 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:28.437 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.437 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:28.695 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:28.695 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:28.695 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:28.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:29.210 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:30.585 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:30.585 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:30.585 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.585 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:30.585 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:30.585 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:30.585 04:11:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.585 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:30.851 04:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.851 04:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:30.851 04:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.851 04:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:31.110 04:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.110 04:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:31.110 04:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.110 04:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:31.368 04:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.368 04:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:31.368 04:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.368 04:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:31.626 04:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:31.626 04:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:31.626 04:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.626 04:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:31.884 04:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.884 04:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:32.142 04:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:23:32.142 04:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:32.400 04:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:32.964 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:33.897 04:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:33.897 04:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:33.897 04:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.897 04:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:34.155 04:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.155 04:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:34.155 04:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.155 04:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:34.413 04:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.413 04:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:34.413 04:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.413 04:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:34.671 04:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.671 04:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:34.671 04:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.671 04:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:34.929 04:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.929 04:11:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:34.929 04:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.929 04:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:35.187 04:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.187 04:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:35.187 04:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.187 04:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:35.445 04:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.445 04:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:35.445 04:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:35.703 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:35.961 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:37.334 04:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:37.334 04:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:37.334 04:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.334 04:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:37.334 04:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:37.334 04:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:37.334 04:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.334 04:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:37.593 04:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.593 04:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:37.593 04:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.593 04:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:37.850 04:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.850 04:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:37.850 04:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.850 04:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:38.107 04:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.107 04:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:38.107 04:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.107 04:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:38.365 04:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.365 04:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:38.365 04:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.365 04:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:38.623 04:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.623 04:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:38.623 04:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:39.189 04:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:39.189 04:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
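Up to the bdev_nvme_set_multipath_policy call earlier in the trace the controller runs with the default active_passive policy, so only one of the two paths reports current as true at any time; once the policy for Nvme0n1 is switched to active_active, every accessible path in the best reported ANA state is marked current, which is why the optimized/optimized and non_optimized/non_optimized cases check out with both current flags true while the mixed non_optimized/optimized case still leaves only the 4421 path current. The policy switch, plus one hypothetical way to dump all three flags for both paths in a single probe (the test itself sticks to the six separate jq calls shown above):

  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'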
00:23:40.562 04:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:40.562 04:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:40.562 04:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.562 04:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:40.562 04:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.562 04:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:40.562 04:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.562 04:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:40.820 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.820 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:40.820 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.820 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:41.077 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.077 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:41.077 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.077 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:41.334 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.334 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:41.334 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.334 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:41.592 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.592 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:41.592 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.592 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:41.850 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.850 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:41.850 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:42.416 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:42.416 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:43.790 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:43.790 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:43.790 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.790 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:43.790 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.790 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:43.790 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.790 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:44.048 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:44.048 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:44.048 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.048 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:44.307 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:23:44.307 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:44.307 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.307 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:44.565 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.565 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:44.565 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.565 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:44.823 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.823 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:44.823 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.823 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:45.081 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:45.081 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2472321 00:23:45.081 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2472321 ']' 00:23:45.081 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2472321 00:23:45.081 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:45.081 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.081 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2472321 00:23:45.349 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:45.349 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:45.349 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2472321' 00:23:45.349 killing process with pid 2472321 00:23:45.349 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2472321 00:23:45.349 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2472321 00:23:45.349 { 00:23:45.349 "results": [ 00:23:45.349 { 00:23:45.349 "job": "Nvme0n1", 
00:23:45.349 "core_mask": "0x4", 00:23:45.349 "workload": "verify", 00:23:45.349 "status": "terminated", 00:23:45.349 "verify_range": { 00:23:45.349 "start": 0, 00:23:45.349 "length": 16384 00:23:45.349 }, 00:23:45.349 "queue_depth": 128, 00:23:45.349 "io_size": 4096, 00:23:45.349 "runtime": 34.334164, 00:23:45.349 "iops": 7959.215200346804, 00:23:45.349 "mibps": 31.0906843763547, 00:23:45.349 "io_failed": 0, 00:23:45.349 "io_timeout": 0, 00:23:45.349 "avg_latency_us": 16054.022558204242, 00:23:45.349 "min_latency_us": 694.802962962963, 00:23:45.349 "max_latency_us": 4026531.84 00:23:45.349 } 00:23:45.349 ], 00:23:45.349 "core_count": 1 00:23:45.349 } 00:23:45.349 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2472321 00:23:45.349 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:45.349 [2024-12-10 04:11:03.436019] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:23:45.349 [2024-12-10 04:11:03.436104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472321 ] 00:23:45.349 [2024-12-10 04:11:03.509235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.349 [2024-12-10 04:11:03.569263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.349 Running I/O for 90 seconds... 00:23:45.349 8382.00 IOPS, 32.74 MiB/s [2024-12-10T03:11:39.738Z] 8450.00 IOPS, 33.01 MiB/s [2024-12-10T03:11:39.738Z] 8468.33 IOPS, 33.08 MiB/s [2024-12-10T03:11:39.738Z] 8455.00 IOPS, 33.03 MiB/s [2024-12-10T03:11:39.738Z] 8482.60 IOPS, 33.14 MiB/s [2024-12-10T03:11:39.738Z] 8467.83 IOPS, 33.08 MiB/s [2024-12-10T03:11:39.738Z] 8452.14 IOPS, 33.02 MiB/s [2024-12-10T03:11:39.738Z] 8434.00 IOPS, 32.95 MiB/s [2024-12-10T03:11:39.738Z] 8432.78 IOPS, 32.94 MiB/s [2024-12-10T03:11:39.738Z] 8439.10 IOPS, 32.97 MiB/s [2024-12-10T03:11:39.738Z] 8429.64 IOPS, 32.93 MiB/s [2024-12-10T03:11:39.738Z] 8428.00 IOPS, 32.92 MiB/s [2024-12-10T03:11:39.738Z] 8424.69 IOPS, 32.91 MiB/s [2024-12-10T03:11:39.738Z] 8430.36 IOPS, 32.93 MiB/s [2024-12-10T03:11:39.738Z] [2024-12-10 04:11:20.077034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.349 [2024-12-10 04:11:20.077094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:45.349 [2024-12-10 04:11:20.077180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.349 [2024-12-10 04:11:20.077210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:45.349 [2024-12-10 04:11:20.077250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.349 [2024-12-10 04:11:20.077277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:45.349 [2024-12-10 04:11:20.077315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:43 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.349 [2024-12-10 04:11:20.077353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:45.349 [2024-12-10 04:11:20.077390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.349 [2024-12-10 04:11:20.077416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:45.349 [2024-12-10 04:11:20.077453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.349 [2024-12-10 04:11:20.077494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:45.349 [2024-12-10 04:11:20.077529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.349 [2024-12-10 04:11:20.077578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:45.349 [2024-12-10 04:11:20.077616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.349 [2024-12-10 04:11:20.077641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.349 [2024-12-10 04:11:20.077762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.349 [2024-12-10 04:11:20.077794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:45.349 [2024-12-10 04:11:20.077838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.349 [2024-12-10 04:11:20.077883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:45.349 [2024-12-10 04:11:20.077922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.349 [2024-12-10 04:11:20.077950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:45.349 [2024-12-10 04:11:20.077988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.349 [2024-12-10 04:11:20.078015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:45.349 [2024-12-10 04:11:20.078054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.349 [2024-12-10 04:11:20.078081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:45.349 [2024-12-10 04:11:20.078117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.349 [2024-12-10 04:11:20.078159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.078195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.078237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.078274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.078301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.078337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.078364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.078401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.078428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.078463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.078490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.078526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.078561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.078601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.078628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.078665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.078697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.078736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.078770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 
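For reference, the port_status checks traced earlier in this run reduce to querying bdevperf's RPC socket for its I/O paths and filtering on the listener port. The following is a minimal reconstruction of that pattern, not the script source; the rpc.py invocation, the bdevperf socket path and the jq filter are taken verbatim from the trace above, everything else (the helper body) is illustrative.

    # Sketch of the accessibility/current/connected check done by port_status
    # in host/multipath_status.sh, reusing the rpc.py call and jq filter traced above.
    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }
    # e.g. port_status 4421 accessible true, as at multipath_status.sh@73 above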
00:23:45.350 [2024-12-10 04:11:20.078823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.078850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.078974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.079002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.079043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.079069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.079105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.079130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.079166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.079192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.079229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.079254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.079291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.079315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.079366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.079391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.079423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.079449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.079486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.079512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.079559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.079587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.079631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.079672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.079709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.079733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.079764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.079785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.079822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.079847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.079890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.079916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.079952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.079976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.080027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.080052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.080089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.080114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.080152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.080177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.080215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.080240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.080278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.080303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.080346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.080370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.080410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.080433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.080463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.080486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.080651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.080682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.080726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.080753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.080794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.080819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.080859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.350 [2024-12-10 04:11:20.080898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.080949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:45.350 [2024-12-10 04:11:20.080974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:45.350 [2024-12-10 04:11:20.081014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.351 [2024-12-10 04:11:20.081041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.081081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.351 [2024-12-10 04:11:20.081106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.081144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.081169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.081208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.081233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.081272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.081298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.081337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.081367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.081407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.081432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.081472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.081496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.081535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.081571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.081614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:91768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.081639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.081679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:91776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.081706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.081746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.081773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.081812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:91792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.081841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.081881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.081909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.081947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.081974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.082013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:91816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.082040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.082080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.082107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.082146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.351 [2024-12-10 04:11:20.082179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.082221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.082247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.082288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:91840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.082315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.082356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.082383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.082424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.082454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.082494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.082520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.082571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.082600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.082640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.082666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.082708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:91888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.082735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.082776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.082804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.082845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:91904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.082873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.082913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:91912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.082941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 
dnr:0 00:23:45.351 [2024-12-10 04:11:20.082981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.083014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.083054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.083081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.083121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.083147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.083189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.083216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.083257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:91952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.083284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.083324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.083353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.083391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.083419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.083459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.083486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.083526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.083563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.083606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:91992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.083633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:45.351 [2024-12-10 04:11:20.083674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.351 [2024-12-10 04:11:20.083701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.083742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.083770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.083810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.083837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.083882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.083908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.083947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.083974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.084023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.084050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.084091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.084118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.084159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.084186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.084226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.084253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.084292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.084321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.084360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.084387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.084594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.084631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.084687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.084714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.084761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.084788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.084832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.084875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.084924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.084950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.084995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.085022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.085066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.085093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.085135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.085162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.085204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:45.352 [2024-12-10 04:11:20.085231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.085274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.352 [2024-12-10 04:11:20.085299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.085344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.352 [2024-12-10 04:11:20.085370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.085413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.352 [2024-12-10 04:11:20.085440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.085482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.352 [2024-12-10 04:11:20.085509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.085575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.352 [2024-12-10 04:11:20.085603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.085647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.352 [2024-12-10 04:11:20.085674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.085720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.085748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.085792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.085826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.085870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.085898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.085941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 
nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.085981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.086025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.086052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.086095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.086122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.086163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.086190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.086231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:20.086257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.086300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.352 [2024-12-10 04:11:20.086325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:20.086370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.352 [2024-12-10 04:11:20.086396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:45.352 8410.13 IOPS, 32.85 MiB/s [2024-12-10T03:11:39.741Z] 7884.50 IOPS, 30.80 MiB/s [2024-12-10T03:11:39.741Z] 7420.71 IOPS, 28.99 MiB/s [2024-12-10T03:11:39.741Z] 7008.44 IOPS, 27.38 MiB/s [2024-12-10T03:11:39.741Z] 6654.37 IOPS, 25.99 MiB/s [2024-12-10T03:11:39.741Z] 6734.55 IOPS, 26.31 MiB/s [2024-12-10T03:11:39.741Z] 6809.90 IOPS, 26.60 MiB/s [2024-12-10T03:11:39.741Z] 6924.73 IOPS, 27.05 MiB/s [2024-12-10T03:11:39.741Z] 7132.39 IOPS, 27.86 MiB/s [2024-12-10T03:11:39.741Z] 7297.58 IOPS, 28.51 MiB/s [2024-12-10T03:11:39.741Z] 7459.80 IOPS, 29.14 MiB/s [2024-12-10T03:11:39.741Z] 7488.69 IOPS, 29.25 MiB/s [2024-12-10T03:11:39.741Z] 7515.11 IOPS, 29.36 MiB/s [2024-12-10T03:11:39.741Z] 7536.82 IOPS, 29.44 MiB/s [2024-12-10T03:11:39.741Z] 7621.14 IOPS, 29.77 MiB/s [2024-12-10T03:11:39.741Z] 7741.97 IOPS, 30.24 MiB/s [2024-12-10T03:11:39.741Z] 7860.35 IOPS, 30.70 MiB/s [2024-12-10T03:11:39.741Z] [2024-12-10 04:11:36.757134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:36.757203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:45.352 [2024-12-10 04:11:36.757253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.352 [2024-12-10 04:11:36.757282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.757344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.353 [2024-12-10 04:11:36.757371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.757406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.353 [2024-12-10 04:11:36.757430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.757465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.353 [2024-12-10 04:11:36.757490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.757525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.353 [2024-12-10 04:11:36.757572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.757610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.353 [2024-12-10 04:11:36.757635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.757669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.757694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.757729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.757754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.757786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.757810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.757847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
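The ASYMMETRIC ACCESS INACCESSIBLE completions above are the host-side effect of the ANA transitions driven by set_ANA_state in the trace earlier. The sketch below is a reconstruction of that helper, not the script source; the rpc.py nvmf_subsystem_listener_set_ana_state calls, the subsystem NQN, the listener address and the ports are exactly as traced at multipath_status.sh@59/@60 above, while the helper body itself is illustrative.

    # Sketch of set_ANA_state from host/multipath_status.sh: set the ANA state of
    # both listeners of cnode1, using the values traced above; the test then sleeps
    # one second (multipath_status.sh@134) before re-checking the paths.
    set_ANA_state() {
        local state_4420=$1 state_4421=$2
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
    }
    # e.g. set_ANA_state non_optimized inaccessible, as at 04:11:36 above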
00:23:45.353 [2024-12-10 04:11:36.757887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.757921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.757946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.757981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.758006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.758039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.758065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.758104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.758130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.758164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.758189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.758222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.758246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.758281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.758306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.758340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.758367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.758401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.758426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.758459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:37696 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.758484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.758518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.758568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.758620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.758646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.758685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.758711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.758748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.758775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.758813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.758855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.758890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.758937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.758972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.758998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.759033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.759058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.759092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.759117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.759151] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.759176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.759209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.759235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.759269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.759294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.759329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.759353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:45.353 [2024-12-10 04:11:36.759386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.353 [2024-12-10 04:11:36.759410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.759445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.759469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.759504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.759529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.759591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.759618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.759655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.759686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.759722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.759749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.759785] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.759812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.759847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.759887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.759920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.759945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.759980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.760005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.760791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.354 [2024-12-10 04:11:36.760826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.760868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.354 [2024-12-10 04:11:36.760895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.760931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:37560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.354 [2024-12-10 04:11:36.760958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.760995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.761021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.761058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.761086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.761136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.761162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 
m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.761199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.761224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.761266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.761292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.761327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.761353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.761387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.354 [2024-12-10 04:11:36.761413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.761448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.354 [2024-12-10 04:11:36.761474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.761507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.354 [2024-12-10 04:11:36.761555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.761593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.354 [2024-12-10 04:11:36.761619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.761655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.354 [2024-12-10 04:11:36.761681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.761717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.354 [2024-12-10 04:11:36.761743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.764657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.764695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.764738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.764766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.764803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.764830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.764867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.764895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.764938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.764966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.765021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.765046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.765097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.765121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.765157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.765182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.765217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.765242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.765277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.765302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.765338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.765363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.765398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.765423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.765458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.765483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.765518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.765551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:45.354 [2024-12-10 04:11:36.765603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.354 [2024-12-10 04:11:36.765629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.765663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.765689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.765724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.765756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.765792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.355 [2024-12-10 04:11:36.765817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.765867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.355 [2024-12-10 04:11:36.765893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.765927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.355 [2024-12-10 04:11:36.765952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.765986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:45.355 [2024-12-10 04:11:36.766011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.766046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.766072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.766106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.766131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.766164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.766189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.766224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.766248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.766281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.766305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.766338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.766364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.766399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.766423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.766457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.766488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.766524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.766574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.766626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 
lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.766652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.766690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.766717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.766755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.766781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.766819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.766846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.766882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.766923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.766960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.766985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.767020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.767046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.767082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.767108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.767158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.767182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.767216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.355 [2024-12-10 04:11:36.767242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.767275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:37816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.355 [2024-12-10 04:11:36.767300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.767345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.355 [2024-12-10 04:11:36.767370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.767404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.355 [2024-12-10 04:11:36.767428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.767463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.355 [2024-12-10 04:11:36.767488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.767523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.355 [2024-12-10 04:11:36.767570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.767610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.355 [2024-12-10 04:11:36.767636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.767673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.355 [2024-12-10 04:11:36.767698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.767734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.355 [2024-12-10 04:11:36.767760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.767796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.355 [2024-12-10 04:11:36.767822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.767871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.355 [2024-12-10 04:11:36.767897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:23:45.355 [2024-12-10 04:11:36.767930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.355 [2024-12-10 04:11:36.767955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.767988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.355 [2024-12-10 04:11:36.768014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.768047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.355 [2024-12-10 04:11:36.768072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.768112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.768137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:45.355 [2024-12-10 04:11:36.768172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.355 [2024-12-10 04:11:36.768197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.768233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.768258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.768293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.768319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.768354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.768378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.768413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.768439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.771585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.771620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.771664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.771693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.771746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.771771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.771808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.771834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.771884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.771909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.771944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.771969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.772005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.772036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.772071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.772111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.772146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.772172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.772206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.772232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.772267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.772293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.772327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.772352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.772387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.772412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.772461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.772487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.772523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.772561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.772601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:37632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.772628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.772665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.772691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.772729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:37760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.772755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.772791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:37824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.772838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.772874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.772914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.772949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:45.356 [2024-12-10 04:11:36.772975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.773009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.773034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.773067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.773092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.773126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.773150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.773199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.773225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.773278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.773305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.773342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.773374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.773412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.773438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.773476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.773504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.773565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.356 [2024-12-10 04:11:36.773593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.773644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 
lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.773671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.773715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.773743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.773779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.773806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.773856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.773882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.773931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.773956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:45.356 [2024-12-10 04:11:36.773988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.356 [2024-12-10 04:11:36.774011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.774044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.357 [2024-12-10 04:11:36.774068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.774103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.357 [2024-12-10 04:11:36.774128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.774162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.357 [2024-12-10 04:11:36.774187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.774222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.774247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.774281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.774304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.774337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.774363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.774398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.774422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.774461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:38040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.774487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.774535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.774573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.774611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.774639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.774678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.357 [2024-12-10 04:11:36.774705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.775561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.357 [2024-12-10 04:11:36.775595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.775637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.775666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.775704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.357 [2024-12-10 04:11:36.775732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:23:45.357 [2024-12-10 04:11:36.775769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.357 [2024-12-10 04:11:36.775796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.775855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.775879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.775927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.775951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.775984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.776008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.776042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.776067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.776103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.357 [2024-12-10 04:11:36.776133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.776168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.357 [2024-12-10 04:11:36.776192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.776226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.357 [2024-12-10 04:11:36.776250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.776283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.357 [2024-12-10 04:11:36.776307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.776339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.357 [2024-12-10 04:11:36.776378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.776412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.357 [2024-12-10 04:11:36.776438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.776471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.357 [2024-12-10 04:11:36.776495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.776562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.357 [2024-12-10 04:11:36.776591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.776628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.357 [2024-12-10 04:11:36.776655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.778726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.778759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.778802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.778845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.778882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.357 [2024-12-10 04:11:36.778921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.778955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.357 [2024-12-10 04:11:36.778986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.779021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.779045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.779078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.779102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.779137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.779161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.779195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.779219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.779254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.779279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.779312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:37632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.779337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.779370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:37760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.779394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:45.357 [2024-12-10 04:11:36.779429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.357 [2024-12-10 04:11:36.779454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.779486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.779511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.779568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.779597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.779634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.779660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.779696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:45.358 [2024-12-10 04:11:36.779722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.779764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.358 [2024-12-10 04:11:36.779791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.779840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.779878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.779912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.779936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.779971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.779995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.780028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.780052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.780086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.358 [2024-12-10 04:11:36.780113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.780146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.358 [2024-12-10 04:11:36.780171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.780204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.358 [2024-12-10 04:11:36.780229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.780260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.358 [2024-12-10 04:11:36.780284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.780316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.358 [2024-12-10 04:11:36.780339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.780371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.780396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.780428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.358 [2024-12-10 04:11:36.780452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.780490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.358 [2024-12-10 04:11:36.780513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.780573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.780601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.780639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.780665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.780702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.780727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.780765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.780791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.783209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.358 [2024-12-10 04:11:36.783242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.783283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.358 [2024-12-10 04:11:36.783309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.783344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.358 [2024-12-10 04:11:36.783369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.783403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.358 [2024-12-10 04:11:36.783427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.783462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.358 [2024-12-10 04:11:36.783486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.783536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.358 [2024-12-10 04:11:36.783566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.783618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.783645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.783684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.783717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.783754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.783782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.783819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.783846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.783883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:38768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.783909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.783959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.783986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
00:23:45.358 [2024-12-10 04:11:36.784034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.784059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.784091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.358 [2024-12-10 04:11:36.784115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:45.358 [2024-12-10 04:11:36.784148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.784172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.784205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.784228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.784260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.784284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.784317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.784341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.784373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.784411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.784446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.359 [2024-12-10 04:11:36.784494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.784532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.784568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.784605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.784632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.784670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.784697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.784733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.784760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.784796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.359 [2024-12-10 04:11:36.784833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.784882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.359 [2024-12-10 04:11:36.784908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.784955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.359 [2024-12-10 04:11:36.784993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.785025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.359 [2024-12-10 04:11:36.785049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.785081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.785105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.785137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.785161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.785194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.785218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.785251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.785280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.785315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.359 [2024-12-10 04:11:36.785339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.785374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.359 [2024-12-10 04:11:36.785399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.787254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.787288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.787344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.787373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.787409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.787436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.787471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.787498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.787535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.359 [2024-12-10 04:11:36.787571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.787609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.359 [2024-12-10 04:11:36.787636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.787674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.359 [2024-12-10 04:11:36.787701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.787738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:45.359 [2024-12-10 04:11:36.787765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.787801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.787844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.787879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.787918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.787956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.359 [2024-12-10 04:11:36.787980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.788012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.359 [2024-12-10 04:11:36.788037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.788069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.788092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.788124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.788148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.788180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.788204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.788235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.788259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.788292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.788315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.788348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.788371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.788404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.788427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:45.359 [2024-12-10 04:11:36.788461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.359 [2024-12-10 04:11:36.788485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.788520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.360 [2024-12-10 04:11:36.788571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.788615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.788641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.788685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.788713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.788749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.788776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.788811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.788841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.788888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.360 [2024-12-10 04:11:36.788912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.788950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.360 [2024-12-10 04:11:36.788973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.789005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.789028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.789060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.360 [2024-12-10 04:11:36.789084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.789116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.360 [2024-12-10 04:11:36.789141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.789174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.789198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.789230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.789254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.789288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.360 [2024-12-10 04:11:36.789312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.789346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.360 [2024-12-10 04:11:36.789370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.789405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.789437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.792780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.792814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.792862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.792891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:23:45.360 [2024-12-10 04:11:36.792928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.792957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.792995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.793023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.793074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.793100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.793135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.793162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.793198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.793238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.793271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.360 [2024-12-10 04:11:36.793295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.793328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.360 [2024-12-10 04:11:36.793352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.793384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.360 [2024-12-10 04:11:36.793408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.793440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.793464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.793497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.360 [2024-12-10 04:11:36.793540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.793587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.360 [2024-12-10 04:11:36.793628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.793664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.793690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.793725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.793751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.793786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.360 [2024-12-10 04:11:36.793811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.793865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.793890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.793938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.360 [2024-12-10 04:11:36.793962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.793995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.360 [2024-12-10 04:11:36.794020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.794052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.360 [2024-12-10 04:11:36.794077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.794109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.360 [2024-12-10 04:11:36.794133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.794165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.794190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.794222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.794245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.794278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.360 [2024-12-10 04:11:36.794302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:45.360 [2024-12-10 04:11:36.794340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.360 [2024-12-10 04:11:36.794364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.794413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.794438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.794473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.361 [2024-12-10 04:11:36.794498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.794533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.794583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.794620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.794646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.794680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.794707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.794743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.794769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.794803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:45.361 [2024-12-10 04:11:36.794828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.794876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.794900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.794948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.794973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.795008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.795033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.795069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.795095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.795137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.361 [2024-12-10 04:11:36.795162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.795199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.361 [2024-12-10 04:11:36.795238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.795273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.795298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.797874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.797920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.797962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.797989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.798041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.798066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.798101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.361 [2024-12-10 04:11:36.798126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.798161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.361 [2024-12-10 04:11:36.798186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.798219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.361 [2024-12-10 04:11:36.798244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.798277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.361 [2024-12-10 04:11:36.798317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.798350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.361 [2024-12-10 04:11:36.798390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.798425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.361 [2024-12-10 04:11:36.798452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.798488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.361 [2024-12-10 04:11:36.798520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.798566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.361 [2024-12-10 04:11:36.798594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.798632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.361 [2024-12-10 04:11:36.798659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.798696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.361 [2024-12-10 04:11:36.798722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.798775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.361 [2024-12-10 04:11:36.798800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.798837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.361 [2024-12-10 04:11:36.798879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.798930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.361 [2024-12-10 04:11:36.798956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.799005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.799033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.799069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.799097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.799132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.799158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.799194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.361 [2024-12-10 04:11:36.799222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.799257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.799283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.799319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.799351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:45.361 [2024-12-10 04:11:36.799389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.799416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.799453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.361 [2024-12-10 04:11:36.799480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.799517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.799552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:45.361 [2024-12-10 04:11:36.799592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.361 [2024-12-10 04:11:36.799620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.799656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.362 [2024-12-10 04:11:36.799683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.799719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.362 [2024-12-10 04:11:36.799747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.799797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.362 [2024-12-10 04:11:36.799823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.799871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.362 [2024-12-10 04:11:36.799911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.799943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.362 [2024-12-10 04:11:36.799983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.800017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.362 [2024-12-10 04:11:36.800041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.800076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.362 [2024-12-10 04:11:36.800100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.800136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.362 [2024-12-10 04:11:36.800161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.800932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.362 [2024-12-10 04:11:36.800963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.801005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.362 [2024-12-10 04:11:36.801032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.801085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.362 [2024-12-10 04:11:36.801111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.801163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.362 [2024-12-10 04:11:36.801187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.801222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.362 [2024-12-10 04:11:36.801247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.801295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.362 [2024-12-10 04:11:36.801323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.801374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.362 [2024-12-10 04:11:36.801401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.801438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.362 [2024-12-10 04:11:36.801465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.801501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.362 [2024-12-10 04:11:36.801527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.801575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.362 [2024-12-10 04:11:36.801602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.801638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.362 [2024-12-10 04:11:36.801664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.801702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:39000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.362 [2024-12-10 04:11:36.801730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.803955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.362 [2024-12-10 04:11:36.803997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.804037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.362 [2024-12-10 04:11:36.804063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.804113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.362 [2024-12-10 04:11:36.804137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.804172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.362 [2024-12-10 04:11:36.804197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.804231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.362 [2024-12-10 04:11:36.804256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.804292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:45.362 [2024-12-10 04:11:36.804317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.804352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.362 [2024-12-10 04:11:36.804376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.804426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.362 [2024-12-10 04:11:36.804450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.804483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.362 [2024-12-10 04:11:36.804508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.804564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.362 [2024-12-10 04:11:36.804591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.804626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.362 [2024-12-10 04:11:36.804653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.804688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.362 [2024-12-10 04:11:36.804714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.804748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.362 [2024-12-10 04:11:36.804780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.804815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.362 [2024-12-10 04:11:36.804854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:45.362 [2024-12-10 04:11:36.804888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.362 [2024-12-10 04:11:36.804926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.804961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.804985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.805019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.805043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.805076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.805099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.805134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.805158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.805191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.805215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.805248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.805272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.805306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.805330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.805363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.805387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.805418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.805442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.805475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.805504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.805538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.805589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.805626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:39000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.805650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.807955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.807986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.808050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.808081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.808121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.808149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.808188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.808215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.808265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.808291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.808341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.808380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.808414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.808438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.808473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.808497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:23:45.363 [2024-12-10 04:11:36.808553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.808580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.808615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.808640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.808681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.808720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.808756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.808783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.808820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.808848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.808885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.808911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.808949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.808975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.809013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.809039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.809077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.809105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.809142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.809183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.809218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.809258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.809292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.809332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.809363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.809387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.809419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.809445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.809484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.809508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.809566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.809605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.809643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.809669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.810492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.810538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.810605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.363 [2024-12-10 04:11:36.810632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:45.363 [2024-12-10 04:11:36.810669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.363 [2024-12-10 04:11:36.810695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:45.364 [2024-12-10 04:11:36.810733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.364 [2024-12-10 04:11:36.810760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:45.364 [2024-12-10 04:11:36.810811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.364 [2024-12-10 04:11:36.810850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:45.364 [2024-12-10 04:11:36.810885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.364 [2024-12-10 04:11:36.810927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:45.364 [2024-12-10 04:11:36.810962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.364 [2024-12-10 04:11:36.811004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:45.364 [2024-12-10 04:11:36.811040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.364 [2024-12-10 04:11:36.811067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:45.364 [2024-12-10 04:11:36.811103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.364 [2024-12-10 04:11:36.811129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:45.364 [2024-12-10 04:11:36.811165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.364 [2024-12-10 04:11:36.811200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:45.364 [2024-12-10 04:11:36.811251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.364 [2024-12-10 04:11:36.811276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:45.364 [2024-12-10 04:11:36.811324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.364 [2024-12-10 04:11:36.811350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:45.364 [2024-12-10 04:11:36.811397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:45.364 [2024-12-10 04:11:36.811421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:45.364 7924.94 IOPS, 30.96 MiB/s [2024-12-10T03:11:39.753Z] 7938.52 IOPS, 31.01 MiB/s [2024-12-10T03:11:39.753Z] 7957.68 IOPS, 31.08 MiB/s [2024-12-10T03:11:39.753Z] Received shutdown signal, test time was about 34.334963 seconds 00:23:45.364 00:23:45.364 Latency(us) 00:23:45.364 [2024-12-10T03:11:39.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.364 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:45.364 Verification LBA range: start 0x0 length 0x4000 00:23:45.364 Nvme0n1 : 34.33 7959.22 31.09 0.00 0.00 16054.02 694.80 4026531.84 00:23:45.364 [2024-12-10T03:11:39.753Z] =================================================================================================================== 00:23:45.364 [2024-12-10T03:11:39.753Z] Total : 7959.22 31.09 0.00 0.00 16054.02 694.80 4026531.84 00:23:45.364 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:45.622 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:45.622 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:45.622 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:45.622 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:45.622 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:45.622 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:45.622 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:45.622 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:45.622 04:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:45.622 rmmod nvme_tcp 00:23:45.880 rmmod nvme_fabrics 00:23:45.880 rmmod nvme_keyring 00:23:45.880 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:45.880 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:45.880 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:45.880 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2472150 ']' 00:23:45.880 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2472150 00:23:45.880 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2472150 ']' 00:23:45.880 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2472150 00:23:45.880 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:45.880 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.880 04:11:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2472150
00:23:45.880 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:45.880 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:45.880 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2472150'
00:23:45.880 killing process with pid 2472150
00:23:45.880 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2472150
00:23:45.880 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2472150
00:23:46.138 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:46.138 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:46.138 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:46.138 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:23:46.138 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:23:46.138 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:46.138 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:23:46.138 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:46.138 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:46.138 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:46.138 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:46.138 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:48.047 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:48.047
00:23:48.047 real 0m43.441s
00:23:48.047 user 2m10.707s
00:23:48.047 sys 0m11.482s
00:23:48.047 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:48.047 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:48.047 ************************************
00:23:48.047 END TEST nvmf_host_multipath_status
00:23:48.047 ************************************
00:23:48.047 04:11:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:23:48.047 04:11:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:23:48.047 04:11:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:48.047 04:11:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:23:48.047 ************************************
00:23:48.047 START TEST nvmf_discovery_remove_ifc
00:23:48.047 ************************************
00:23:48.047 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc --
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:48.306 * Looking for test storage... 00:23:48.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:48.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.306 --rc genhtml_branch_coverage=1 00:23:48.306 --rc genhtml_function_coverage=1 00:23:48.306 --rc genhtml_legend=1 00:23:48.306 --rc geninfo_all_blocks=1 00:23:48.306 --rc geninfo_unexecuted_blocks=1 00:23:48.306 00:23:48.306 ' 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:48.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.306 --rc genhtml_branch_coverage=1 00:23:48.306 --rc genhtml_function_coverage=1 00:23:48.306 --rc genhtml_legend=1 00:23:48.306 --rc geninfo_all_blocks=1 00:23:48.306 --rc geninfo_unexecuted_blocks=1 00:23:48.306 00:23:48.306 ' 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:48.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.306 --rc genhtml_branch_coverage=1 00:23:48.306 --rc genhtml_function_coverage=1 00:23:48.306 --rc genhtml_legend=1 00:23:48.306 --rc geninfo_all_blocks=1 00:23:48.306 --rc geninfo_unexecuted_blocks=1 00:23:48.306 00:23:48.306 ' 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:48.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.306 --rc genhtml_branch_coverage=1 00:23:48.306 --rc genhtml_function_coverage=1 00:23:48.306 --rc genhtml_legend=1 00:23:48.306 --rc geninfo_all_blocks=1 00:23:48.306 --rc geninfo_unexecuted_blocks=1 00:23:48.306 00:23:48.306 ' 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.306 
04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.306 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:48.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:23:48.307 04:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:23:50.919 04:11:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:50.919 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.919 04:11:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:50.919 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:50.919 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:50.919 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.919 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:50.920 
04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:50.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:50.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:23:50.920 00:23:50.920 --- 10.0.0.2 ping statistics --- 00:23:50.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.920 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:50.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:50.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:23:50.920 00:23:50.920 --- 10.0.0.1 ping statistics --- 00:23:50.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.920 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2478792 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2478792 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2478792 ']' 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
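The interface bring-up traced above (nvmf_tcp_init) reduces to a small namespace topology: the target-side port cvl_0_0 is moved into a private namespace and given 10.0.0.2, the initiator keeps cvl_0_1 with 10.0.0.1 in the default namespace, TCP port 4420 is opened for the initiator, and a ping in each direction confirms the path. A condensed sketch with the same names as the trace; the ipts wrapper is just iptables plus an SPDK_NVMF comment so the rule can be found and removed at teardown:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator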
00:23:50.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.920 04:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.920 [2024-12-10 04:11:44.984738] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:23:50.920 [2024-12-10 04:11:44.984828] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.920 [2024-12-10 04:11:45.054984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.920 [2024-12-10 04:11:45.107985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.920 [2024-12-10 04:11:45.108048] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.920 [2024-12-10 04:11:45.108062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.920 [2024-12-10 04:11:45.108074] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.920 [2024-12-10 04:11:45.108083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.920 [2024-12-10 04:11:45.108762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.920 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.920 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:50.920 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:50.920 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:50.920 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.920 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.920 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:50.920 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.920 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.920 [2024-12-10 04:11:45.266421] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.920 [2024-12-10 04:11:45.274683] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:50.920 null0 00:23:51.178 [2024-12-10 04:11:45.306569] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.178 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.178 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2478932 00:23:51.178 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:23:51.178 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2478932 /tmp/host.sock 00:23:51.178 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2478932 ']' 00:23:51.178 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:51.178 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.178 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:51.178 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:51.178 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.178 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:51.178 [2024-12-10 04:11:45.373788] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:23:51.178 [2024-12-10 04:11:45.373869] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478932 ] 00:23:51.178 [2024-12-10 04:11:45.442602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.178 [2024-12-10 04:11:45.503020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.437 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.437 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:51.437 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.437 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:51.437 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.437 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:51.437 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.437 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:51.437 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.437 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:51.437 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.437 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:51.437 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.437 04:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:52.820 [2024-12-10 04:11:46.794656] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:52.820 [2024-12-10 04:11:46.794690] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:52.820 [2024-12-10 04:11:46.794714] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:52.820 [2024-12-10 04:11:46.880986] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:52.820 [2024-12-10 04:11:47.023028] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:52.820 [2024-12-10 04:11:47.024061] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1f28650:1 started. 00:23:52.820 [2024-12-10 04:11:47.025770] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:52.820 [2024-12-10 04:11:47.025826] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:52.820 [2024-12-10 04:11:47.025882] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:52.820 [2024-12-10 04:11:47.025904] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:52.820 [2024-12-10 04:11:47.025944] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:52.820 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.820 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:52.821 [2024-12-10 04:11:47.073307] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1f28650 was disconnected and freed. delete nvme_qpair. 
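The host side of the test is a second SPDK app started against its own RPC socket (-m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme), then driven with three RPCs: set the bdev_nvme options, finish framework init, and attach to the target's discovery service on 10.0.0.2:8009. A rough equivalent using scripts/rpc.py in place of the harness's rpc_cmd wrapper (an assumption about what that wrapper forwards to); the flags are copied from the trace:

    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    # (the harness waits for /tmp/host.sock to appear before issuing RPCs)
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    ./scripts/rpc.py -s /tmp/host.sock framework_start_init
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    # --wait-for-attach returns only after the discovered subsystem's controller is
    # attached, which is why nvme0n1 is already present on the first bdev_get_bdevs poll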
00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:52.821 04:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:54.197 04:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:54.197 04:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.197 04:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:54.197 04:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.197 04:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:54.197 04:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:54.197 04:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:54.197 04:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.197 04:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:54.197 04:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:55.134 04:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:55.134 04:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:55.134 04:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:55.134 04:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.134 04:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:55.134 04:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:55.134 04:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
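The repeated bdev_get_bdevs | jq -r '.[].name' | sort | xargs blocks above and below are the harness's get_bdev_list / wait_for_bdev helpers polling the host once a second until the bdev list matches an expected value (nvme0n1 while the path is up, the empty string once the controller is gone). A simplified sketch of that loop, again assuming rpc_cmd maps onto scripts/rpc.py and ignoring the harness's retry bookkeeping:

    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        # poll once per second until the space-joined, sorted bdev list equals $expected
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme0n1   # path up: discovery attached the namespace as nvme0n1
    wait_for_bdev ''        # path removed: list drains once the controller is dropped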
00:23:55.134 04:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.134 04:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:55.134 04:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:56.072 04:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:56.072 04:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.072 04:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.072 04:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:56.072 04:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:56.072 04:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:56.072 04:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:56.072 04:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.072 04:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:56.072 04:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:57.011 04:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:57.011 04:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.011 04:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:57.011 04:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.011 04:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:57.011 04:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:57.011 04:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:57.011 04:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.011 04:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:57.011 04:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:58.391 04:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:58.391 04:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:58.391 04:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:58.391 04:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.391 04:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:58.391 04:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:58.391 04:11:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:58.391 04:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.391 04:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:58.391 04:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:58.391 [2024-12-10 04:11:52.467354] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:58.391 [2024-12-10 04:11:52.467436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.391 [2024-12-10 04:11:52.467458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.391 [2024-12-10 04:11:52.467477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.391 [2024-12-10 04:11:52.467489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.391 [2024-12-10 04:11:52.467502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.391 [2024-12-10 04:11:52.467514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.391 [2024-12-10 04:11:52.467526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.391 [2024-12-10 04:11:52.467537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.391 [2024-12-10 04:11:52.467558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.391 [2024-12-10 04:11:52.467570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.391 [2024-12-10 04:11:52.467583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04e90 is same with the state(6) to be set 00:23:58.391 [2024-12-10 04:11:52.477375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f04e90 (9): Bad file descriptor 00:23:58.391 [2024-12-10 04:11:52.487415] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:58.391 [2024-12-10 04:11:52.487436] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:58.391 [2024-12-10 04:11:52.487448] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:58.391 [2024-12-10 04:11:52.487457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:58.391 [2024-12-10 04:11:52.487516] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
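The errno 110 (connection timed out) and reset/reconnect entries above are the intended fallout of the step that pulled the interface out from under the live connection a few seconds earlier; stripped of the harness plumbing, that step is just:

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # with --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2 the host retries about
    # once per second, then gives the controller up, and the bdev list drains to ''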
00:23:59.326 04:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:59.326 04:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:59.326 04:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:59.326 04:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.326 04:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:59.326 04:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:59.326 04:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:59.326 [2024-12-10 04:11:53.494587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:59.326 [2024-12-10 04:11:53.494650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f04e90 with addr=10.0.0.2, port=4420 00:23:59.326 [2024-12-10 04:11:53.494691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04e90 is same with the state(6) to be set 00:23:59.326 [2024-12-10 04:11:53.494729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f04e90 (9): Bad file descriptor 00:23:59.326 [2024-12-10 04:11:53.495181] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:23:59.326 [2024-12-10 04:11:53.495221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:59.326 [2024-12-10 04:11:53.495238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:59.326 [2024-12-10 04:11:53.495253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:59.326 [2024-12-10 04:11:53.495265] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:59.326 [2024-12-10 04:11:53.495275] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:59.326 [2024-12-10 04:11:53.495282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:59.326 [2024-12-10 04:11:53.495294] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:59.326 [2024-12-10 04:11:53.495303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:59.326 04:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.326 04:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:59.326 04:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:00.264 [2024-12-10 04:11:54.497790] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:00.264 [2024-12-10 04:11:54.497817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:00.264 [2024-12-10 04:11:54.497834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:00.264 [2024-12-10 04:11:54.497846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:00.264 [2024-12-10 04:11:54.497872] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:00.264 [2024-12-10 04:11:54.497883] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:00.264 [2024-12-10 04:11:54.497891] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:00.264 [2024-12-10 04:11:54.497898] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:00.264 [2024-12-10 04:11:54.497948] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:00.264 [2024-12-10 04:11:54.497997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.264 [2024-12-10 04:11:54.498019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.264 [2024-12-10 04:11:54.498038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.264 [2024-12-10 04:11:54.498050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.264 [2024-12-10 04:11:54.498063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.264 [2024-12-10 04:11:54.498082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.264 [2024-12-10 04:11:54.498095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.264 [2024-12-10 04:11:54.498107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.264 [2024-12-10 04:11:54.498120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.264 [2024-12-10 04:11:54.498132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.264 [2024-12-10 04:11:54.498144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:24:00.264 [2024-12-10 04:11:54.498195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef45e0 (9): Bad file descriptor 00:24:00.264 [2024-12-10 04:11:54.499186] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:00.264 [2024-12-10 04:11:54.499207] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:00.264 04:11:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:01.647 04:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:01.647 04:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:01.647 04:11:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:01.647 04:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.647 04:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:01.647 04:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:01.647 04:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:01.647 04:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.647 04:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:01.647 04:11:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:02.215 [2024-12-10 04:11:56.549706] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:02.215 [2024-12-10 04:11:56.549731] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:02.215 [2024-12-10 04:11:56.549755] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:02.473 [2024-12-10 04:11:56.636037] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:02.473 04:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:02.473 04:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:02.473 04:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:02.473 04:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.473 04:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:02.473 04:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:02.473 04:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:02.473 04:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.473 [2024-12-10 04:11:56.690715] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:02.473 [2024-12-10 04:11:56.691490] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1f31f60:1 started. 
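Restoring the path is the mirror image: put the address back and bring the link up inside the namespace. The discovery poller, which never went away, reconnects on its own, and because this is a fresh attach the controller comes back under a new name, so the harness now waits for nvme1n1 instead of nvme0n1:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1    # same polling helper as before, now expecting the re-attached bdev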
00:24:02.473 [2024-12-10 04:11:56.692918] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:02.473 [2024-12-10 04:11:56.692963] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:02.473 [2024-12-10 04:11:56.692996] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:02.473 [2024-12-10 04:11:56.693017] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:02.473 [2024-12-10 04:11:56.693029] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:02.473 04:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:02.473 04:11:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:02.473 [2024-12-10 04:11:56.739165] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1f31f60 was disconnected and freed. delete nvme_qpair. 00:24:03.410 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:03.410 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:03.410 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.410 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:03.410 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:03.410 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:03.410 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:03.410 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.410 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:03.410 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:03.410 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2478932 00:24:03.410 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2478932 ']' 00:24:03.410 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2478932 00:24:03.410 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:03.410 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.410 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2478932 00:24:03.669 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:03.669 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:03.669 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2478932' 00:24:03.669 killing process with pid 2478932 
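Shutdown, which the remaining entries walk through, mirrors the setup: kill both SPDK processes, unload the kernel NVMe/TCP modules, and undo the iptables rule and namespace. A condensed sketch; treating remove_spdk_ns as deleting the namespace created earlier is an assumption, and the kills are shown bare where the harness's killprocess also waits for the process to exit:

    kill "$hostpid"     # 2478932: the /tmp/host.sock app driving discovery
    kill "$nvmfpid"     # 2478792: the target running inside cvl_0_0_ns_spdk
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                        # assumption: what remove_spdk_ns amounts to here
    ip -4 addr flush cvl_0_1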
00:24:03.669 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2478932 00:24:03.669 04:11:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2478932 00:24:03.669 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:03.669 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:03.669 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:03.669 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.669 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:03.669 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.669 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.669 rmmod nvme_tcp 00:24:03.669 rmmod nvme_fabrics 00:24:03.929 rmmod nvme_keyring 00:24:03.929 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.929 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:03.929 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:03.929 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2478792 ']' 00:24:03.929 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2478792 00:24:03.929 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2478792 ']' 00:24:03.929 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2478792 00:24:03.929 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:03.929 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.929 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2478792 00:24:03.929 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:03.929 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:03.929 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2478792' 00:24:03.929 killing process with pid 2478792 00:24:03.929 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2478792 00:24:03.929 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2478792 00:24:04.189 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:04.189 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:04.189 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:04.189 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:04.190 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:04.190 04:11:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:04.190 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.190 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.190 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:04.190 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.190 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.190 04:11:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.095 04:12:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:06.095 00:24:06.095 real 0m17.992s 00:24:06.095 user 0m26.002s 00:24:06.095 sys 0m3.105s 00:24:06.095 04:12:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.095 04:12:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:06.095 ************************************ 00:24:06.095 END TEST nvmf_discovery_remove_ifc 00:24:06.095 ************************************ 00:24:06.095 04:12:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:06.095 04:12:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:06.095 04:12:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.095 04:12:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.095 ************************************ 00:24:06.095 START TEST nvmf_identify_kernel_target 00:24:06.095 ************************************ 00:24:06.095 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:06.353 * Looking for test storage... 
00:24:06.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.353 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:06.353 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:24:06.353 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:06.353 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:06.353 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.353 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.353 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.353 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.353 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.353 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.353 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.353 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.353 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.353 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:06.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.354 --rc genhtml_branch_coverage=1 00:24:06.354 --rc genhtml_function_coverage=1 00:24:06.354 --rc genhtml_legend=1 00:24:06.354 --rc geninfo_all_blocks=1 00:24:06.354 --rc geninfo_unexecuted_blocks=1 00:24:06.354 00:24:06.354 ' 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:06.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.354 --rc genhtml_branch_coverage=1 00:24:06.354 --rc genhtml_function_coverage=1 00:24:06.354 --rc genhtml_legend=1 00:24:06.354 --rc geninfo_all_blocks=1 00:24:06.354 --rc geninfo_unexecuted_blocks=1 00:24:06.354 00:24:06.354 ' 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:06.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.354 --rc genhtml_branch_coverage=1 00:24:06.354 --rc genhtml_function_coverage=1 00:24:06.354 --rc genhtml_legend=1 00:24:06.354 --rc geninfo_all_blocks=1 00:24:06.354 --rc geninfo_unexecuted_blocks=1 00:24:06.354 00:24:06.354 ' 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:06.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.354 --rc genhtml_branch_coverage=1 00:24:06.354 --rc genhtml_function_coverage=1 00:24:06.354 --rc genhtml_legend=1 00:24:06.354 --rc geninfo_all_blocks=1 00:24:06.354 --rc geninfo_unexecuted_blocks=1 00:24:06.354 00:24:06.354 ' 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:24:06.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.354 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.256 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.256 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:08.256 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:08.256 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:08.256 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:08.256 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:08.256 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:08.256 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:08.257 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:08.257 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:08.515 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:08.516 04:12:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:08.516 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:08.516 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:08.516 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:08.516 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:08.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:24:08.516 00:24:08.516 --- 10.0.0.2 ping statistics --- 00:24:08.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.516 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:08.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:24:08.516 00:24:08.516 --- 10.0.0.1 ping statistics --- 00:24:08.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.516 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.516 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:08.517 04:12:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:08.517 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:09.893 Waiting for block devices as requested 00:24:09.893 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:09.893 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:09.893 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:10.152 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:10.152 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:10.152 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:10.152 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:10.412 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:10.412 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:10.412 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:10.412 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:10.672 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:10.672 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:10.672 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:10.672 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:10.931 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:10.931 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:10.931 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:10.931 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:10.931 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:10.931 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:10.931 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:11.188 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
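The configure_kernel_target steps that continue below (mkdir/echo/ln -s into configfs, then nvme discover) are compressed into single xtrace lines; the same flow is easier to follow spelled out. A minimal sketch follows, with one caveat: the trace only shows the values being echoed, so the attribute file names used here are the standard kernel nvmet configfs ones and are an assumption, not something this log confirms.

# Build a kernel NVMe-oF/TCP target backed by the local NVMe disk, mirroring the trace below.
modprobe nvmet            # as in nvmf/common.sh@670 above
modprobe nvmet_tcp        # tcp transport for the port configured below
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"

echo 1            > "$subsys/attr_allow_any_host"        # accept any hostnqn (assumed attribute name)
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # the unpartitioned local disk the trace below picks
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"   # main-namespace IP chosen by get_main_ns_ip above
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"

# Publishing the subsystem on the port is what makes the two discovery records below appear.
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420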
00:24:11.188 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:11.188 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:11.188 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:11.188 No valid GPT data, bailing 00:24:11.188 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:11.188 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:11.188 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:11.188 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:11.188 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:11.188 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:11.188 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:11.188 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:11.188 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:11.188 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:11.188 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:11.188 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:11.189 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:11.189 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:11.189 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:11.189 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:11.189 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:11.189 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:24:11.189 00:24:11.189 Discovery Log Number of Records 2, Generation counter 2 00:24:11.189 =====Discovery Log Entry 0====== 00:24:11.189 trtype: tcp 00:24:11.189 adrfam: ipv4 00:24:11.189 subtype: current discovery subsystem 00:24:11.189 treq: not specified, sq flow control disable supported 00:24:11.189 portid: 1 00:24:11.189 trsvcid: 4420 00:24:11.189 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:11.189 traddr: 10.0.0.1 00:24:11.189 eflags: none 00:24:11.189 sectype: none 00:24:11.189 =====Discovery Log Entry 1====== 00:24:11.189 trtype: tcp 00:24:11.189 adrfam: ipv4 00:24:11.189 subtype: nvme subsystem 00:24:11.189 treq: not specified, sq flow control disable 
supported 00:24:11.189 portid: 1 00:24:11.189 trsvcid: 4420 00:24:11.189 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:11.189 traddr: 10.0.0.1 00:24:11.189 eflags: none 00:24:11.189 sectype: none 00:24:11.189 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:11.189 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:11.448 ===================================================== 00:24:11.448 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:11.448 ===================================================== 00:24:11.448 Controller Capabilities/Features 00:24:11.448 ================================ 00:24:11.448 Vendor ID: 0000 00:24:11.448 Subsystem Vendor ID: 0000 00:24:11.448 Serial Number: 440db5955d12b30863c1 00:24:11.448 Model Number: Linux 00:24:11.448 Firmware Version: 6.8.9-20 00:24:11.448 Recommended Arb Burst: 0 00:24:11.448 IEEE OUI Identifier: 00 00 00 00:24:11.448 Multi-path I/O 00:24:11.448 May have multiple subsystem ports: No 00:24:11.448 May have multiple controllers: No 00:24:11.448 Associated with SR-IOV VF: No 00:24:11.448 Max Data Transfer Size: Unlimited 00:24:11.448 Max Number of Namespaces: 0 00:24:11.448 Max Number of I/O Queues: 1024 00:24:11.448 NVMe Specification Version (VS): 1.3 00:24:11.448 NVMe Specification Version (Identify): 1.3 00:24:11.448 Maximum Queue Entries: 1024 00:24:11.448 Contiguous Queues Required: No 00:24:11.448 Arbitration Mechanisms Supported 00:24:11.448 Weighted Round Robin: Not Supported 00:24:11.448 Vendor Specific: Not Supported 00:24:11.448 Reset Timeout: 7500 ms 00:24:11.448 Doorbell Stride: 4 bytes 00:24:11.448 NVM Subsystem Reset: Not Supported 00:24:11.448 Command Sets Supported 00:24:11.448 NVM Command Set: Supported 00:24:11.448 Boot Partition: Not Supported 00:24:11.448 Memory Page Size Minimum: 4096 bytes 00:24:11.448 Memory Page Size Maximum: 4096 bytes 00:24:11.448 Persistent Memory Region: Not Supported 00:24:11.448 Optional Asynchronous Events Supported 00:24:11.448 Namespace Attribute Notices: Not Supported 00:24:11.448 Firmware Activation Notices: Not Supported 00:24:11.448 ANA Change Notices: Not Supported 00:24:11.448 PLE Aggregate Log Change Notices: Not Supported 00:24:11.448 LBA Status Info Alert Notices: Not Supported 00:24:11.448 EGE Aggregate Log Change Notices: Not Supported 00:24:11.448 Normal NVM Subsystem Shutdown event: Not Supported 00:24:11.448 Zone Descriptor Change Notices: Not Supported 00:24:11.448 Discovery Log Change Notices: Supported 00:24:11.448 Controller Attributes 00:24:11.448 128-bit Host Identifier: Not Supported 00:24:11.448 Non-Operational Permissive Mode: Not Supported 00:24:11.448 NVM Sets: Not Supported 00:24:11.448 Read Recovery Levels: Not Supported 00:24:11.448 Endurance Groups: Not Supported 00:24:11.448 Predictable Latency Mode: Not Supported 00:24:11.448 Traffic Based Keep ALive: Not Supported 00:24:11.448 Namespace Granularity: Not Supported 00:24:11.448 SQ Associations: Not Supported 00:24:11.448 UUID List: Not Supported 00:24:11.448 Multi-Domain Subsystem: Not Supported 00:24:11.448 Fixed Capacity Management: Not Supported 00:24:11.448 Variable Capacity Management: Not Supported 00:24:11.448 Delete Endurance Group: Not Supported 00:24:11.448 Delete NVM Set: Not Supported 00:24:11.448 Extended LBA Formats Supported: Not Supported 00:24:11.448 Flexible Data Placement 
Supported: Not Supported 00:24:11.448 00:24:11.448 Controller Memory Buffer Support 00:24:11.448 ================================ 00:24:11.448 Supported: No 00:24:11.448 00:24:11.448 Persistent Memory Region Support 00:24:11.448 ================================ 00:24:11.448 Supported: No 00:24:11.448 00:24:11.448 Admin Command Set Attributes 00:24:11.448 ============================ 00:24:11.448 Security Send/Receive: Not Supported 00:24:11.448 Format NVM: Not Supported 00:24:11.448 Firmware Activate/Download: Not Supported 00:24:11.448 Namespace Management: Not Supported 00:24:11.448 Device Self-Test: Not Supported 00:24:11.448 Directives: Not Supported 00:24:11.448 NVMe-MI: Not Supported 00:24:11.448 Virtualization Management: Not Supported 00:24:11.448 Doorbell Buffer Config: Not Supported 00:24:11.448 Get LBA Status Capability: Not Supported 00:24:11.448 Command & Feature Lockdown Capability: Not Supported 00:24:11.448 Abort Command Limit: 1 00:24:11.448 Async Event Request Limit: 1 00:24:11.448 Number of Firmware Slots: N/A 00:24:11.448 Firmware Slot 1 Read-Only: N/A 00:24:11.448 Firmware Activation Without Reset: N/A 00:24:11.448 Multiple Update Detection Support: N/A 00:24:11.448 Firmware Update Granularity: No Information Provided 00:24:11.449 Per-Namespace SMART Log: No 00:24:11.449 Asymmetric Namespace Access Log Page: Not Supported 00:24:11.449 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:11.449 Command Effects Log Page: Not Supported 00:24:11.449 Get Log Page Extended Data: Supported 00:24:11.449 Telemetry Log Pages: Not Supported 00:24:11.449 Persistent Event Log Pages: Not Supported 00:24:11.449 Supported Log Pages Log Page: May Support 00:24:11.449 Commands Supported & Effects Log Page: Not Supported 00:24:11.449 Feature Identifiers & Effects Log Page:May Support 00:24:11.449 NVMe-MI Commands & Effects Log Page: May Support 00:24:11.449 Data Area 4 for Telemetry Log: Not Supported 00:24:11.449 Error Log Page Entries Supported: 1 00:24:11.449 Keep Alive: Not Supported 00:24:11.449 00:24:11.449 NVM Command Set Attributes 00:24:11.449 ========================== 00:24:11.449 Submission Queue Entry Size 00:24:11.449 Max: 1 00:24:11.449 Min: 1 00:24:11.449 Completion Queue Entry Size 00:24:11.449 Max: 1 00:24:11.449 Min: 1 00:24:11.449 Number of Namespaces: 0 00:24:11.449 Compare Command: Not Supported 00:24:11.449 Write Uncorrectable Command: Not Supported 00:24:11.449 Dataset Management Command: Not Supported 00:24:11.449 Write Zeroes Command: Not Supported 00:24:11.449 Set Features Save Field: Not Supported 00:24:11.449 Reservations: Not Supported 00:24:11.449 Timestamp: Not Supported 00:24:11.449 Copy: Not Supported 00:24:11.449 Volatile Write Cache: Not Present 00:24:11.449 Atomic Write Unit (Normal): 1 00:24:11.449 Atomic Write Unit (PFail): 1 00:24:11.449 Atomic Compare & Write Unit: 1 00:24:11.449 Fused Compare & Write: Not Supported 00:24:11.449 Scatter-Gather List 00:24:11.449 SGL Command Set: Supported 00:24:11.449 SGL Keyed: Not Supported 00:24:11.449 SGL Bit Bucket Descriptor: Not Supported 00:24:11.449 SGL Metadata Pointer: Not Supported 00:24:11.449 Oversized SGL: Not Supported 00:24:11.449 SGL Metadata Address: Not Supported 00:24:11.449 SGL Offset: Supported 00:24:11.449 Transport SGL Data Block: Not Supported 00:24:11.449 Replay Protected Memory Block: Not Supported 00:24:11.449 00:24:11.449 Firmware Slot Information 00:24:11.449 ========================= 00:24:11.449 Active slot: 0 00:24:11.449 00:24:11.449 00:24:11.449 Error Log 00:24:11.449 
========= 00:24:11.449 00:24:11.449 Active Namespaces 00:24:11.449 ================= 00:24:11.449 Discovery Log Page 00:24:11.449 ================== 00:24:11.449 Generation Counter: 2 00:24:11.449 Number of Records: 2 00:24:11.449 Record Format: 0 00:24:11.449 00:24:11.449 Discovery Log Entry 0 00:24:11.449 ---------------------- 00:24:11.449 Transport Type: 3 (TCP) 00:24:11.449 Address Family: 1 (IPv4) 00:24:11.449 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:11.449 Entry Flags: 00:24:11.449 Duplicate Returned Information: 0 00:24:11.449 Explicit Persistent Connection Support for Discovery: 0 00:24:11.449 Transport Requirements: 00:24:11.449 Secure Channel: Not Specified 00:24:11.449 Port ID: 1 (0x0001) 00:24:11.449 Controller ID: 65535 (0xffff) 00:24:11.449 Admin Max SQ Size: 32 00:24:11.449 Transport Service Identifier: 4420 00:24:11.449 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:11.449 Transport Address: 10.0.0.1 00:24:11.449 Discovery Log Entry 1 00:24:11.449 ---------------------- 00:24:11.449 Transport Type: 3 (TCP) 00:24:11.449 Address Family: 1 (IPv4) 00:24:11.449 Subsystem Type: 2 (NVM Subsystem) 00:24:11.449 Entry Flags: 00:24:11.449 Duplicate Returned Information: 0 00:24:11.449 Explicit Persistent Connection Support for Discovery: 0 00:24:11.449 Transport Requirements: 00:24:11.449 Secure Channel: Not Specified 00:24:11.449 Port ID: 1 (0x0001) 00:24:11.449 Controller ID: 65535 (0xffff) 00:24:11.449 Admin Max SQ Size: 32 00:24:11.449 Transport Service Identifier: 4420 00:24:11.449 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:11.449 Transport Address: 10.0.0.1 00:24:11.449 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:11.449 get_feature(0x01) failed 00:24:11.449 get_feature(0x02) failed 00:24:11.449 get_feature(0x04) failed 00:24:11.449 ===================================================== 00:24:11.449 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:11.449 ===================================================== 00:24:11.449 Controller Capabilities/Features 00:24:11.449 ================================ 00:24:11.449 Vendor ID: 0000 00:24:11.449 Subsystem Vendor ID: 0000 00:24:11.449 Serial Number: 8d43e89c2419aff67a02 00:24:11.449 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:11.449 Firmware Version: 6.8.9-20 00:24:11.449 Recommended Arb Burst: 6 00:24:11.449 IEEE OUI Identifier: 00 00 00 00:24:11.449 Multi-path I/O 00:24:11.449 May have multiple subsystem ports: Yes 00:24:11.449 May have multiple controllers: Yes 00:24:11.449 Associated with SR-IOV VF: No 00:24:11.449 Max Data Transfer Size: Unlimited 00:24:11.449 Max Number of Namespaces: 1024 00:24:11.449 Max Number of I/O Queues: 128 00:24:11.449 NVMe Specification Version (VS): 1.3 00:24:11.449 NVMe Specification Version (Identify): 1.3 00:24:11.449 Maximum Queue Entries: 1024 00:24:11.449 Contiguous Queues Required: No 00:24:11.449 Arbitration Mechanisms Supported 00:24:11.449 Weighted Round Robin: Not Supported 00:24:11.449 Vendor Specific: Not Supported 00:24:11.449 Reset Timeout: 7500 ms 00:24:11.449 Doorbell Stride: 4 bytes 00:24:11.449 NVM Subsystem Reset: Not Supported 00:24:11.449 Command Sets Supported 00:24:11.449 NVM Command Set: Supported 00:24:11.449 Boot Partition: Not Supported 00:24:11.449 
Memory Page Size Minimum: 4096 bytes 00:24:11.449 Memory Page Size Maximum: 4096 bytes 00:24:11.449 Persistent Memory Region: Not Supported 00:24:11.449 Optional Asynchronous Events Supported 00:24:11.449 Namespace Attribute Notices: Supported 00:24:11.449 Firmware Activation Notices: Not Supported 00:24:11.449 ANA Change Notices: Supported 00:24:11.449 PLE Aggregate Log Change Notices: Not Supported 00:24:11.449 LBA Status Info Alert Notices: Not Supported 00:24:11.449 EGE Aggregate Log Change Notices: Not Supported 00:24:11.449 Normal NVM Subsystem Shutdown event: Not Supported 00:24:11.449 Zone Descriptor Change Notices: Not Supported 00:24:11.449 Discovery Log Change Notices: Not Supported 00:24:11.449 Controller Attributes 00:24:11.449 128-bit Host Identifier: Supported 00:24:11.449 Non-Operational Permissive Mode: Not Supported 00:24:11.449 NVM Sets: Not Supported 00:24:11.449 Read Recovery Levels: Not Supported 00:24:11.449 Endurance Groups: Not Supported 00:24:11.449 Predictable Latency Mode: Not Supported 00:24:11.449 Traffic Based Keep ALive: Supported 00:24:11.449 Namespace Granularity: Not Supported 00:24:11.449 SQ Associations: Not Supported 00:24:11.449 UUID List: Not Supported 00:24:11.449 Multi-Domain Subsystem: Not Supported 00:24:11.449 Fixed Capacity Management: Not Supported 00:24:11.449 Variable Capacity Management: Not Supported 00:24:11.449 Delete Endurance Group: Not Supported 00:24:11.449 Delete NVM Set: Not Supported 00:24:11.449 Extended LBA Formats Supported: Not Supported 00:24:11.449 Flexible Data Placement Supported: Not Supported 00:24:11.449 00:24:11.449 Controller Memory Buffer Support 00:24:11.449 ================================ 00:24:11.449 Supported: No 00:24:11.449 00:24:11.449 Persistent Memory Region Support 00:24:11.449 ================================ 00:24:11.449 Supported: No 00:24:11.449 00:24:11.449 Admin Command Set Attributes 00:24:11.449 ============================ 00:24:11.449 Security Send/Receive: Not Supported 00:24:11.449 Format NVM: Not Supported 00:24:11.449 Firmware Activate/Download: Not Supported 00:24:11.449 Namespace Management: Not Supported 00:24:11.449 Device Self-Test: Not Supported 00:24:11.449 Directives: Not Supported 00:24:11.449 NVMe-MI: Not Supported 00:24:11.449 Virtualization Management: Not Supported 00:24:11.449 Doorbell Buffer Config: Not Supported 00:24:11.449 Get LBA Status Capability: Not Supported 00:24:11.449 Command & Feature Lockdown Capability: Not Supported 00:24:11.449 Abort Command Limit: 4 00:24:11.449 Async Event Request Limit: 4 00:24:11.449 Number of Firmware Slots: N/A 00:24:11.449 Firmware Slot 1 Read-Only: N/A 00:24:11.449 Firmware Activation Without Reset: N/A 00:24:11.449 Multiple Update Detection Support: N/A 00:24:11.449 Firmware Update Granularity: No Information Provided 00:24:11.449 Per-Namespace SMART Log: Yes 00:24:11.449 Asymmetric Namespace Access Log Page: Supported 00:24:11.449 ANA Transition Time : 10 sec 00:24:11.449 00:24:11.449 Asymmetric Namespace Access Capabilities 00:24:11.449 ANA Optimized State : Supported 00:24:11.449 ANA Non-Optimized State : Supported 00:24:11.449 ANA Inaccessible State : Supported 00:24:11.449 ANA Persistent Loss State : Supported 00:24:11.449 ANA Change State : Supported 00:24:11.449 ANAGRPID is not changed : No 00:24:11.450 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:11.450 00:24:11.450 ANA Group Identifier Maximum : 128 00:24:11.450 Number of ANA Group Identifiers : 128 00:24:11.450 Max Number of Allowed Namespaces : 1024 00:24:11.450 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:11.450 Command Effects Log Page: Supported 00:24:11.450 Get Log Page Extended Data: Supported 00:24:11.450 Telemetry Log Pages: Not Supported 00:24:11.450 Persistent Event Log Pages: Not Supported 00:24:11.450 Supported Log Pages Log Page: May Support 00:24:11.450 Commands Supported & Effects Log Page: Not Supported 00:24:11.450 Feature Identifiers & Effects Log Page:May Support 00:24:11.450 NVMe-MI Commands & Effects Log Page: May Support 00:24:11.450 Data Area 4 for Telemetry Log: Not Supported 00:24:11.450 Error Log Page Entries Supported: 128 00:24:11.450 Keep Alive: Supported 00:24:11.450 Keep Alive Granularity: 1000 ms 00:24:11.450 00:24:11.450 NVM Command Set Attributes 00:24:11.450 ========================== 00:24:11.450 Submission Queue Entry Size 00:24:11.450 Max: 64 00:24:11.450 Min: 64 00:24:11.450 Completion Queue Entry Size 00:24:11.450 Max: 16 00:24:11.450 Min: 16 00:24:11.450 Number of Namespaces: 1024 00:24:11.450 Compare Command: Not Supported 00:24:11.450 Write Uncorrectable Command: Not Supported 00:24:11.450 Dataset Management Command: Supported 00:24:11.450 Write Zeroes Command: Supported 00:24:11.450 Set Features Save Field: Not Supported 00:24:11.450 Reservations: Not Supported 00:24:11.450 Timestamp: Not Supported 00:24:11.450 Copy: Not Supported 00:24:11.450 Volatile Write Cache: Present 00:24:11.450 Atomic Write Unit (Normal): 1 00:24:11.450 Atomic Write Unit (PFail): 1 00:24:11.450 Atomic Compare & Write Unit: 1 00:24:11.450 Fused Compare & Write: Not Supported 00:24:11.450 Scatter-Gather List 00:24:11.450 SGL Command Set: Supported 00:24:11.450 SGL Keyed: Not Supported 00:24:11.450 SGL Bit Bucket Descriptor: Not Supported 00:24:11.450 SGL Metadata Pointer: Not Supported 00:24:11.450 Oversized SGL: Not Supported 00:24:11.450 SGL Metadata Address: Not Supported 00:24:11.450 SGL Offset: Supported 00:24:11.450 Transport SGL Data Block: Not Supported 00:24:11.450 Replay Protected Memory Block: Not Supported 00:24:11.450 00:24:11.450 Firmware Slot Information 00:24:11.450 ========================= 00:24:11.450 Active slot: 0 00:24:11.450 00:24:11.450 Asymmetric Namespace Access 00:24:11.450 =========================== 00:24:11.450 Change Count : 0 00:24:11.450 Number of ANA Group Descriptors : 1 00:24:11.450 ANA Group Descriptor : 0 00:24:11.450 ANA Group ID : 1 00:24:11.450 Number of NSID Values : 1 00:24:11.450 Change Count : 0 00:24:11.450 ANA State : 1 00:24:11.450 Namespace Identifier : 1 00:24:11.450 00:24:11.450 Commands Supported and Effects 00:24:11.450 ============================== 00:24:11.450 Admin Commands 00:24:11.450 -------------- 00:24:11.450 Get Log Page (02h): Supported 00:24:11.450 Identify (06h): Supported 00:24:11.450 Abort (08h): Supported 00:24:11.450 Set Features (09h): Supported 00:24:11.450 Get Features (0Ah): Supported 00:24:11.450 Asynchronous Event Request (0Ch): Supported 00:24:11.450 Keep Alive (18h): Supported 00:24:11.450 I/O Commands 00:24:11.450 ------------ 00:24:11.450 Flush (00h): Supported 00:24:11.450 Write (01h): Supported LBA-Change 00:24:11.450 Read (02h): Supported 00:24:11.450 Write Zeroes (08h): Supported LBA-Change 00:24:11.450 Dataset Management (09h): Supported 00:24:11.450 00:24:11.450 Error Log 00:24:11.450 ========= 00:24:11.450 Entry: 0 00:24:11.450 Error Count: 0x3 00:24:11.450 Submission Queue Id: 0x0 00:24:11.450 Command Id: 0x5 00:24:11.450 Phase Bit: 0 00:24:11.450 Status Code: 0x2 00:24:11.450 Status Code Type: 0x0 00:24:11.450 Do Not Retry: 1 00:24:11.450 
Error Location: 0x28 00:24:11.450 LBA: 0x0 00:24:11.450 Namespace: 0x0 00:24:11.450 Vendor Log Page: 0x0 00:24:11.450 ----------- 00:24:11.450 Entry: 1 00:24:11.450 Error Count: 0x2 00:24:11.450 Submission Queue Id: 0x0 00:24:11.450 Command Id: 0x5 00:24:11.450 Phase Bit: 0 00:24:11.450 Status Code: 0x2 00:24:11.450 Status Code Type: 0x0 00:24:11.450 Do Not Retry: 1 00:24:11.450 Error Location: 0x28 00:24:11.450 LBA: 0x0 00:24:11.450 Namespace: 0x0 00:24:11.450 Vendor Log Page: 0x0 00:24:11.450 ----------- 00:24:11.450 Entry: 2 00:24:11.450 Error Count: 0x1 00:24:11.450 Submission Queue Id: 0x0 00:24:11.450 Command Id: 0x4 00:24:11.450 Phase Bit: 0 00:24:11.450 Status Code: 0x2 00:24:11.450 Status Code Type: 0x0 00:24:11.450 Do Not Retry: 1 00:24:11.450 Error Location: 0x28 00:24:11.450 LBA: 0x0 00:24:11.450 Namespace: 0x0 00:24:11.450 Vendor Log Page: 0x0 00:24:11.450 00:24:11.450 Number of Queues 00:24:11.450 ================ 00:24:11.450 Number of I/O Submission Queues: 128 00:24:11.450 Number of I/O Completion Queues: 128 00:24:11.450 00:24:11.450 ZNS Specific Controller Data 00:24:11.450 ============================ 00:24:11.450 Zone Append Size Limit: 0 00:24:11.450 00:24:11.450 00:24:11.450 Active Namespaces 00:24:11.450 ================= 00:24:11.450 get_feature(0x05) failed 00:24:11.450 Namespace ID:1 00:24:11.450 Command Set Identifier: NVM (00h) 00:24:11.450 Deallocate: Supported 00:24:11.450 Deallocated/Unwritten Error: Not Supported 00:24:11.450 Deallocated Read Value: Unknown 00:24:11.450 Deallocate in Write Zeroes: Not Supported 00:24:11.450 Deallocated Guard Field: 0xFFFF 00:24:11.450 Flush: Supported 00:24:11.450 Reservation: Not Supported 00:24:11.450 Namespace Sharing Capabilities: Multiple Controllers 00:24:11.450 Size (in LBAs): 1953525168 (931GiB) 00:24:11.450 Capacity (in LBAs): 1953525168 (931GiB) 00:24:11.450 Utilization (in LBAs): 1953525168 (931GiB) 00:24:11.450 UUID: efb0fff2-ea23-473b-865f-09482031ddc5 00:24:11.450 Thin Provisioning: Not Supported 00:24:11.450 Per-NS Atomic Units: Yes 00:24:11.450 Atomic Boundary Size (Normal): 0 00:24:11.450 Atomic Boundary Size (PFail): 0 00:24:11.450 Atomic Boundary Offset: 0 00:24:11.450 NGUID/EUI64 Never Reused: No 00:24:11.450 ANA group ID: 1 00:24:11.450 Namespace Write Protected: No 00:24:11.450 Number of LBA Formats: 1 00:24:11.450 Current LBA Format: LBA Format #00 00:24:11.450 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:11.450 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:11.450 rmmod nvme_tcp 00:24:11.450 rmmod nvme_fabrics 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:11.450 04:12:05 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.450 04:12:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.984 04:12:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:13.984 04:12:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:13.984 04:12:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:13.984 04:12:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:13.984 04:12:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:13.984 04:12:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:13.984 04:12:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:13.984 04:12:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:13.984 04:12:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:13.984 04:12:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:13.984 04:12:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:14.923 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:14.923 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:14.923 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:14.923 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:14.923 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:14.923 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:24:14.923 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:14.923 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:14.923 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:14.923 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:14.923 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:14.923 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:14.923 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:14.923 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:14.923 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:14.923 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:15.862 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:15.862 00:24:15.862 real 0m9.781s 00:24:15.862 user 0m2.103s 00:24:15.862 sys 0m3.661s 00:24:15.862 04:12:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:15.862 04:12:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.862 ************************************ 00:24:15.862 END TEST nvmf_identify_kernel_target 00:24:15.862 ************************************ 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.120 ************************************ 00:24:16.120 START TEST nvmf_auth_host 00:24:16.120 ************************************ 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:16.120 * Looking for test storage... 
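The nvmf_auth_host run starting here goes through the same nvmftestinit/nvmf_tcp_init plumbing that opened the previous test: the first E810 port (cvl_0_0) is moved into a private network namespace as the target-side NIC, the second (cvl_0_1) stays in the main namespace as the initiator side, and a pair of pings proves the link before any NVMe traffic. A minimal sketch of that setup, assuming the interface names and 10.0.0.0/24 addressing used throughout this run (the real rule also carries an '-m comment' SPDK_NVMF tag so the iptr teardown above can filter it back out):

ip netns add cvl_0_0_ns_spdk                      # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # first port becomes the target NIC

ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, main namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                             # main namespace -> netns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # netns -> main namespace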
00:24:16.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.120 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:16.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.120 --rc genhtml_branch_coverage=1 00:24:16.120 --rc genhtml_function_coverage=1 00:24:16.120 --rc genhtml_legend=1 00:24:16.120 --rc geninfo_all_blocks=1 00:24:16.120 --rc geninfo_unexecuted_blocks=1 00:24:16.121 00:24:16.121 ' 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:16.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.121 --rc genhtml_branch_coverage=1 00:24:16.121 --rc genhtml_function_coverage=1 00:24:16.121 --rc genhtml_legend=1 00:24:16.121 --rc geninfo_all_blocks=1 00:24:16.121 --rc geninfo_unexecuted_blocks=1 00:24:16.121 00:24:16.121 ' 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:16.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.121 --rc genhtml_branch_coverage=1 00:24:16.121 --rc genhtml_function_coverage=1 00:24:16.121 --rc genhtml_legend=1 00:24:16.121 --rc geninfo_all_blocks=1 00:24:16.121 --rc geninfo_unexecuted_blocks=1 00:24:16.121 00:24:16.121 ' 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:16.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.121 --rc genhtml_branch_coverage=1 00:24:16.121 --rc genhtml_function_coverage=1 00:24:16.121 --rc genhtml_legend=1 00:24:16.121 --rc geninfo_all_blocks=1 00:24:16.121 --rc geninfo_unexecuted_blocks=1 00:24:16.121 00:24:16.121 ' 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.121 04:12:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:16.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:16.121 04:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:18.665 04:12:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:18.665 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:18.665 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.665 
04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:18.665 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:18.665 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:18.666 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:18.666 04:12:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:18.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:18.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:24:18.666 00:24:18.666 --- 10.0.0.2 ping statistics --- 00:24:18.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.666 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:18.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:18.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:24:18.666 00:24:18.666 --- 10.0.0.1 ping statistics --- 00:24:18.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.666 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2486648 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2486648 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2486648 ']' 00:24:18.666 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.667 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:18.667 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
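[editor's note] The trace above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks on "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...". The following is a minimal conceptual sketch of that wait step, not SPDK's waitforlisten() itself; the 30-second timeout and the optional PID liveness check are assumptions, while the socket path is taken from the log.

#!/usr/bin/env python3
# Sketch: poll a UNIX-domain RPC socket until the freshly started target
# process accepts connections, mirroring the "waitforlisten" step in the
# trace. Conceptual only; timeout value is an assumption.
import os
import socket
import time

def wait_for_rpc_socket(path="/var/tmp/spdk.sock", pid=None, timeout=30.0):
    """Return once a UNIX-domain socket at `path` accepts a connection."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if pid is not None:
            try:
                os.kill(pid, 0)          # raises if the target process died
            except ProcessLookupError:
                raise RuntimeError(f"target pid {pid} exited before listening")
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.settimeout(1.0)
                s.connect(path)          # succeeds once the RPC server is up
                return
        except (FileNotFoundError, ConnectionRefusedError, socket.timeout):
            time.sleep(0.2)              # socket not ready yet, retry
    raise TimeoutError(f"{path} did not start listening within {timeout}s")

if __name__ == "__main__":
    wait_for_rpc_socket()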
00:24:18.667 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:18.667 04:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cb86f4faf30c39c10360847be2516bea 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.G40 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cb86f4faf30c39c10360847be2516bea 0 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cb86f4faf30c39c10360847be2516bea 0 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cb86f4faf30c39c10360847be2516bea 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.G40 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.G40 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.G40 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:18.931 04:12:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b7b8b7532e285ec87b2ce7adf7f3866a586a69da3eca78346ac8b86d67410978 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.SpY 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b7b8b7532e285ec87b2ce7adf7f3866a586a69da3eca78346ac8b86d67410978 3 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b7b8b7532e285ec87b2ce7adf7f3866a586a69da3eca78346ac8b86d67410978 3 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b7b8b7532e285ec87b2ce7adf7f3866a586a69da3eca78346ac8b86d67410978 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.SpY 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.SpY 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.SpY 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5ff6bb4ddc4aa171c3dfcfdb3355fa3cf0d445ff23629541 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.65C 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5ff6bb4ddc4aa171c3dfcfdb3355fa3cf0d445ff23629541 0 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5ff6bb4ddc4aa171c3dfcfdb3355fa3cf0d445ff23629541 0 
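[editor's note] The gen_dhchap_key calls traced above draw a hex key from /dev/urandom via xxd, write it to a mktemp file such as /tmp/spdk.key-null.65C with mode 0600, and format it through an un-echoed `python -` one-liner into the DHHC-1:<digest-id>:<base64>: strings that appear later in the log. The sketch below reproduces that formatting; the CRC-32 trailer is an assumption based on the standard NVMe DH-HMAC-CHAP secret representation, since the python body itself is not shown in the trace.

#!/usr/bin/env python3
# Sketch of the DHHC-1 secret formatting implied by the trace. Digest id 00
# means "no hash"; 01/02/03 correspond to sha256/sha384/sha512 as in the
# digests array above. The CRC-32 suffix is an assumption, not shown in the log.
import base64
import os
import struct
import zlib

def gen_dhchap_key(length=48, digest_id=0):
    """Return a DHHC-1 secret string for a random key of `length` hex chars."""
    key = os.urandom(length // 2).hex().encode()      # mimics: xxd -p -c0 -l N /dev/urandom
    blob = key + struct.pack("<I", zlib.crc32(key))   # key bytes + little-endian CRC-32
    return "DHHC-1:%02x:%s:" % (digest_id, base64.b64encode(blob).decode())

if __name__ == "__main__":
    # e.g. a 48-hex-character key with digest id 0, like keys[1] in the trace
    print(gen_dhchap_key(48, 0))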
00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5ff6bb4ddc4aa171c3dfcfdb3355fa3cf0d445ff23629541 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.65C 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.65C 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.65C 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ee0cb4399e9d476a01c320ba36709f366c52a7ae86885453 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.hTl 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ee0cb4399e9d476a01c320ba36709f366c52a7ae86885453 2 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ee0cb4399e9d476a01c320ba36709f366c52a7ae86885453 2 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ee0cb4399e9d476a01c320ba36709f366c52a7ae86885453 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.hTl 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.hTl 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.hTl 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:18.931 04:12:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f97fb9014877c4101a7d2905067c71b5 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.QAq 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f97fb9014877c4101a7d2905067c71b5 1 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f97fb9014877c4101a7d2905067c71b5 1 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f97fb9014877c4101a7d2905067c71b5 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:18.931 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:19.262 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.QAq 00:24:19.262 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.QAq 00:24:19.262 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.QAq 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6d8ebf6bf6947ae9c739d9818deaa80b 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.34a 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6d8ebf6bf6947ae9c739d9818deaa80b 1 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6d8ebf6bf6947ae9c739d9818deaa80b 1 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=6d8ebf6bf6947ae9c739d9818deaa80b 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.34a 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.34a 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.34a 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=89bcefa04173363b23ca464e6012754817f977615f588090 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ZJJ 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 89bcefa04173363b23ca464e6012754817f977615f588090 2 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 89bcefa04173363b23ca464e6012754817f977615f588090 2 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=89bcefa04173363b23ca464e6012754817f977615f588090 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ZJJ 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ZJJ 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ZJJ 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:19.263 04:12:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9792bad5d44ad7b9a9067a3f471d2e79 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yFd 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9792bad5d44ad7b9a9067a3f471d2e79 0 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9792bad5d44ad7b9a9067a3f471d2e79 0 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9792bad5d44ad7b9a9067a3f471d2e79 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yFd 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yFd 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.yFd 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5e88249435c3f9efc62ddbf3638d762f301a5b554cd7e01dc7a272fe6ce13f26 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.PkW 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5e88249435c3f9efc62ddbf3638d762f301a5b554cd7e01dc7a272fe6ce13f26 3 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5e88249435c3f9efc62ddbf3638d762f301a5b554cd7e01dc7a272fe6ce13f26 3 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5e88249435c3f9efc62ddbf3638d762f301a5b554cd7e01dc7a272fe6ce13f26 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.PkW 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.PkW 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.PkW 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2486648 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2486648 ']' 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:19.263 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.G40 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.SpY ]] 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SpY 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.65C 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.hTl ]] 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.hTl 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.QAq 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.34a ]] 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.34a 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ZJJ 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.yFd ]] 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.yFd 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.550 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.PkW 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.551 04:12:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:19.551 04:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:20.928 Waiting for block devices as requested 00:24:20.928 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:20.928 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:20.928 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:20.928 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:21.187 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:21.187 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:21.187 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:21.187 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:21.444 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:21.444 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:21.444 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:21.444 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:21.702 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:21.702 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:21.702 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:21.702 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:21.961 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:22.219 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:22.219 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:22.219 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:22.219 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:22.219 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:22.219 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:22.219 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:22.219 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:22.219 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:22.478 No valid GPT data, bailing 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:22.478 04:12:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:24:22.478 00:24:22.478 Discovery Log Number of Records 2, Generation counter 2 00:24:22.478 =====Discovery Log Entry 0====== 00:24:22.478 trtype: tcp 00:24:22.478 adrfam: ipv4 00:24:22.478 subtype: current discovery subsystem 00:24:22.478 treq: not specified, sq flow control disable supported 00:24:22.478 portid: 1 00:24:22.478 trsvcid: 4420 00:24:22.478 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:22.478 traddr: 10.0.0.1 00:24:22.478 eflags: none 00:24:22.478 sectype: none 00:24:22.478 =====Discovery Log Entry 1====== 00:24:22.478 trtype: tcp 00:24:22.478 adrfam: ipv4 00:24:22.478 subtype: nvme subsystem 00:24:22.478 treq: not specified, sq flow control disable supported 00:24:22.478 portid: 1 00:24:22.478 trsvcid: 4420 00:24:22.478 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:22.478 traddr: 10.0.0.1 00:24:22.478 eflags: none 00:24:22.478 sectype: none 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: ]] 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.478 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.739 nvme0n1 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: ]] 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
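Up to this point the trace covers the target-side plumbing: nvmf/common.sh builds a kernel nvmet subsystem and TCP listener through configfs, host/auth.sh restricts it to nqn.2024-02.io.spdk:host0, and nvmet_auth_set_key loads the first DH-HMAC-CHAP secret before the authenticated attach. The xtrace records only the values being echoed, not the configfs files they land in, so the sketch below re-creates that setup using the standard kernel nvmet attribute names; treat every path and attribute name as an assumption rather than a quote from nvmf/common.sh.

  # Sketch of the target-side setup implied by the echoes above. Attribute names
  # are the usual kernel nvmet configfs ones (assumed, not taken from the log).
  subnqn=nqn.2024-02.io.spdk:cnode0
  hostnqn=nqn.2024-02.io.spdk:host0
  subsys=/sys/kernel/config/nvmet/subsystems/$subnqn
  port=/sys/kernel/config/nvmet/ports/1
  host=/sys/kernel/config/nvmet/hosts/$hostnqn

  modprobe nvmet
  modprobe nvmet-tcp

  mkdir "$subsys" "$port" "$host"
  mkdir "$subsys/namespaces/1"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # backing namespace seen in the trace
  echo 1            > "$subsys/namespaces/1/enable"

  echo 10.0.0.1 > "$port/addr_traddr"                      # listener reported by nvme discover
  echo tcp      > "$port/addr_trtype"
  echo 4420     > "$port/addr_trsvcid"
  echo ipv4     > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                      # expose the subsystem on the port

  echo 0 > "$subsys/attr_allow_any_host"                   # only explicitly allowed hosts may connect
  ln -s "$host" "$subsys/allowed_hosts/"

  # DH-HMAC-CHAP credentials for this host; the real secrets are the DHHC-1
  # strings visible in the trace. The dhchap_* attribute names are assumptions.
  echo 'hmac(sha256)'      > "$host/dhchap_hash"
  echo ffdhe2048           > "$host/dhchap_dhgroup"
  echo "$DHCHAP_HOST_KEY"  > "$host/dhchap_key"
  echo "$DHCHAP_CTRL_KEY"  > "$host/dhchap_ctrl_key"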
00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.739 04:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.999 nvme0n1 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.999 04:12:17 
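The entries that follow repeat connect_authenticate for every digest, DH group and key combination. Assuming rpc_cmd is the usual thin wrapper around scripts/rpc.py, the initiator side of one iteration (sha256, ffdhe2048, key 0) boils down to the calls sketched below; key0 and ckey0 are keyring names that host/auth.sh is assumed to have registered earlier, outside this excerpt.

  # Initiator-side sequence for one iteration, expressed as direct rpc.py calls
  # rather than the rpc_cmd wrapper used by the test harness.
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 \
      --dhchap-dhgroups ffdhe2048

  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # The attach only returns a controller if the DH-HMAC-CHAP handshake succeeds;
  # the test then checks the controller name and tears it down again.
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0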
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: ]] 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.999 nvme0n1 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.999 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: ]] 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:23.257 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.258 nvme0n1 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.258 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: ]] 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.517 nvme0n1 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:23.517 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.518 04:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.778 nvme0n1 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.778 04:12:18 
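The secrets rotated through these iterations all use the NVMe DH-HMAC-CHAP representation DHHC-1:xx:<base64>:, where the two-digit field selects the optional hash used to transform the configured secret (00 = none, 01/02/03 = SHA-256/384/512) and the base64 portion carries the secret plus a checksum. If compatible secrets are needed outside the test, recent nvme-cli releases can generate them; the flags below are from nvme-cli's gen-dhchap-key and should be checked against the installed version.

  # Generating DH-HMAC-CHAP secrets in the same DHHC-1:xx:... format as the trace
  # (recent nvme-cli; verify the exact options with `nvme gen-dhchap-key --help`).
  nvme gen-dhchap-key --key-length=32 --hmac=0 --nqn=nqn.2024-02.io.spdk:host0   # -> DHHC-1:00:...
  nvme gen-dhchap-key --key-length=48 --hmac=2 --nqn=nqn.2024-02.io.spdk:host0   # -> DHHC-1:02:...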
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: ]] 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.778 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.038 nvme0n1 00:24:24.038 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.038 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.038 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.038 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.038 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.038 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: ]] 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.039 
04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.039 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.299 nvme0n1 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: ]] 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.299 04:12:18 
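The repetition in this part of the log comes from a three-level sweep in host/auth.sh (the @100 through @103 markers in the trace): every digest is combined with every DH group and every key index, and each combination sets the target-side key and then attempts an authenticated attach. Reconstructed from those markers, the skeleton looks roughly like the sketch below; the digest, dhgroup and key values are taken from the xtrace, the loop wrapper itself is an assumption.

  # Skeleton of the sweep driving the repeated entries above (reconstructed, not
  # copied from host/auth.sh). keys[0..4]/ckeys[0..4] and the two helpers are the
  # ones defined earlier in the script.
  digests=(sha256 sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side: write configfs dhchap_* values
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side: SPDK RPC attach/verify/detach
          done
      done
  done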
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.299 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.300 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.300 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.300 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:24.300 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.300 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.559 nvme0n1 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: ]] 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.559 04:12:18 
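Each attach in the trace is preceded by the same get_main_ns_ip expansion (nvmf/common.sh@769 through @783): build a map from transport to the name of the environment variable holding the initiator-facing address, then resolve and print that variable, which is how 10.0.0.1 keeps appearing. A condensed reconstruction is sketched below; TEST_TRANSPORT stands in for whatever variable expands to "tcp" in the real script.

  # Condensed reconstruction of the address-selection helper whose expansion is
  # dumped before every attach. TEST_TRANSPORT is a stand-in variable name.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -n ${!ip} ]] || return 1   # indirect expansion: the named variable must hold an address
      echo "${!ip}"                 # 10.0.0.1 in this run (NVMF_INITIATOR_IP)
  }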
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.559 04:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.817 nvme0n1 00:24:24.817 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.817 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.817 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.817 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.817 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.817 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.817 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.817 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.817 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:24.818 04:12:19 
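Key index 4 is the one case where the controller key is empty (the [[ -z '' ]] check above), so the attach is issued with --dhchap-key key4 only and the host authenticates itself without requesting authentication of the controller in return. The ckey=(...) expansion at host/auth.sh@58 is the bash idiom that makes the flag optional; the attach line below restates the call with placeholder variables for the values that repeat throughout the trace.

  # Optional-flag idiom from host/auth.sh@58: the array stays empty when
  # ckeys[keyid] is empty (key index 4), so no --dhchap-ctrlr-key is passed.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a "$ip" -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key "key${keyid}" "${ckey[@]}"
  # $ip, $hostnqn and $subnqn are placeholders for 10.0.0.1,
  # nqn.2024-02.io.spdk:host0 and nqn.2024-02.io.spdk:cnode0 as seen in the trace.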
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.818 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.076 nvme0n1 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: ]] 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.076 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.336 nvme0n1 00:24:25.336 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.336 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.336 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.336 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.336 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.336 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.336 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.336 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.336 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.336 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.336 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.336 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.336 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:25.336 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.336 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:25.336 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.336 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:25.336 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:25.337 04:12:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: ]] 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.337 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.905 nvme0n1 00:24:25.905 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:24:25.905 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.905 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.905 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.905 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.905 04:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: ]] 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.905 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:25.906 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
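The trace above repeats one host-side pattern for every digest / DH-group / key combination: restrict the initiator to a single DH-HMAC-CHAP digest and group, attach over TCP with the key under test, confirm the controller came up, then detach. A condensed bash sketch of that sequence follows; rpc_cmd, the 10.0.0.1 initiator address, the NQNs and the keyN/ckeyN key names are taken from the trace itself, while the wrapper function and its arguments are illustrative only.

# Hypothetical helper condensing the per-combination check seen in the trace.
# Assumes rpc_cmd (the test suite's RPC wrapper) and the keys/ckeys arrays of
# registered DHHC-1 key names are already in scope, as they are in auth.sh.
check_dhchap_combination() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Allow only the digest/DH group under test so nothing else can be negotiated.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Attach over TCP, authenticating with key<keyid>; request bidirectional
    # authentication only when a controller key exists for this keyid.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
    # The attach succeeds only after DH-HMAC-CHAP completes, so seeing the
    # controller listed is the pass condition; detach to reset for the next run.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

In the log this work is split between connect_authenticate (host/auth.sh lines 55-61) and the verification/teardown at lines 64-65; the sketch merges them purely for brevity.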
00:24:25.906 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.906 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.906 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.906 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.906 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.906 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.906 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.906 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.906 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.906 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.906 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.906 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.906 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.906 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.906 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.906 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.166 nvme0n1 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: ]] 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.166 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.167 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.167 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:26.167 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.167 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.427 nvme0n1 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.427 04:12:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.427 04:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.686 nvme0n1 00:24:26.686 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.686 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.686 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.686 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.686 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.686 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.686 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.686 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.686 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.686 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: ]] 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.944 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:24:26.945 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.945 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.945 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.945 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.945 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.514 nvme0n1 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: ]] 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 
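On the target side, each nvmet_auth_set_key invocation in the trace echoes the digest (as 'hmac(sha256)'), the DH group, the host DHHC-1 secret and, when one is defined, the controller secret. The xtrace shows only the echo side, not the redirect targets, so the configfs paths in the sketch below are an assumption about the kernel nvmet layout rather than something visible in this log.

# Assumed sketch of the target-side key provisioning traced above. The echoed
# values come from the log; the /sys/kernel/config/nvmet/... destinations are
# an assumption and may differ by kernel version.
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}   # the script's key arrays
    # Assumed host entry under the kernel target's configfs tree.
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(${digest})" > "${host}/dhchap_hash"     # e.g. 'hmac(sha256)'
    echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # e.g. ffdhe6144
    echo "${key}"          > "${host}/dhchap_key"      # host DHHC-1 secret
    # Mirrors the [[ -z $ckey ]] guard in the trace: only install a controller
    # key (bidirectional auth) when one is defined for this keyid.
    [[ -z "${ckey}" ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"
}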
00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.514 04:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.773 nvme0n1 00:24:27.773 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.773 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.773 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.773 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.773 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.773 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.031 04:12:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: ]] 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.031 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.032 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.032 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.032 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.032 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.032 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.032 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.032 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.032 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.032 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:28.032 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.032 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.598 nvme0n1 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: ]] 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.599 04:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.165 nvme0n1 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.165 04:12:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.165 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.166 04:12:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.166 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.424 nvme0n1 00:24:29.424 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.424 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.424 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.424 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.424 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.424 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.683 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.683 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.683 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: ]] 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.684 04:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:30.623 nvme0n1 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: ]] 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.623 04:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.563 nvme0n1 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:31.563 
04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: ]] 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.563 04:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.502 nvme0n1 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: ]] 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.502 
04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.502 04:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.444 nvme0n1 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.444 04:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.013 nvme0n1 00:24:34.013 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.013 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.013 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.013 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.013 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.013 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.271 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.271 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.271 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.271 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.271 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.271 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:34.271 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:34.271 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.271 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:34.271 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.271 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:34.271 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.271 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: ]] 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.272 nvme0n1 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.272 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: ]] 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.532 nvme0n1 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:34.532 04:12:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: ]] 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.532 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.533 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.533 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.533 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.533 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.533 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.533 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.533 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.533 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.533 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.533 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.533 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.533 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:34.533 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.533 04:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.794 nvme0n1 00:24:34.794 04:12:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: ]] 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.794 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.054 nvme0n1 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.054 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.055 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.055 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.055 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.055 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:35.055 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.055 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.315 nvme0n1 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: ]] 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:35.315 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.316 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.575 nvme0n1 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.575 
04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: ]] 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.575 04:12:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.575 04:12:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.834 nvme0n1 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: ]] 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.834 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.093 nvme0n1 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: ]] 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.093 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.352 nvme0n1 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:36.352 
04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.352 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.611 nvme0n1 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.611 
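Each keyid above is preceded by nvmet_auth_set_key, which provisions the same secret on the kernel nvmet target; xtrace only prints the echoed values ('hmac(sha384)', the DH group, the DHHC-1 secrets), not where they are written. The sketch below is a hedged guess at that target-side half: the configfs attribute paths are an assumption about the Linux nvmet host entry and are not visible in this excerpt.

  #!/usr/bin/env bash
  # Hedged sketch of nvmet_auth_set_key (target side). The configfs paths are
  # assumptions; only the echoed values appear in the trace above.
  set -euo pipefail

  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
  key=$1        # DHHC-1 secret, as printed by the test
  ckey=${2-}    # optional controller (bidirectional) secret; empty for keyid 4 above

  echo 'hmac(sha384)' > "$host_dir/dhchap_hash"
  echo 'ffdhe3072'    > "$host_dir/dhchap_dhgroup"
  echo "$key"         > "$host_dir/dhchap_key"
  # Mirrors the [[ -z $ckey ]] guard at host/auth.sh@51 in the trace.
  [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"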
04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: ]] 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.611 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:36.612 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:36.612 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:36.612 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.612 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.612 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:36.612 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.612 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:36.612 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:36.612 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:36.612 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:36.612 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.612 04:12:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.872 nvme0n1 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: ]] 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.872 04:12:31 
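The get_main_ns_ip fragment that repeats before every attach (local -A ip_candidates ... echo 10.0.0.1) picks which address to dial based on the transport. A hedged reconstruction follows; the trace only shows the selected variable name and the final echo, so the TEST_TRANSPORT name and the indirect expansion are assumptions, and the fallback taken when the [[ -z 10.0.0.1 ]] check fails (never exercised in this run) is omitted.

  # Hedged reconstruction of get_main_ns_ip as traced above (nvmf/common.sh@769-783).
  # TEST_TRANSPORT and the ${!ip} indirection are assumptions; only the chosen
  # variable name and the echoed 10.0.0.1 are visible in the log.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # "NVMF_INITIATOR_IP" for tcp
      [[ -z ${!ip} ]] && return 1            # fallback path omitted in this sketch
      echo "${!ip}"                          # 10.0.0.1 in this run
  }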
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.872 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.132 nvme0n1 00:24:37.132 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.132 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.132 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.132 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.132 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.132 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.132 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.132 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.132 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.132 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.132 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.132 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.132 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:37.132 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: ]] 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.133 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.391 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.391 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.391 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.391 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.391 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.391 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.391 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.391 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.391 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.391 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.391 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.391 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.391 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:37.391 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.391 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.651 nvme0n1 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: ]] 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.651 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.652 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.652 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.652 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.652 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.652 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.652 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.652 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.652 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.652 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.652 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.652 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:37.652 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.652 04:12:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.912 nvme0n1 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:37.912 04:12:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.912 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.172 nvme0n1 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: ]] 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.172 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.430 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:38.430 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.430 04:12:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.688 nvme0n1 00:24:38.688 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.688 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.688 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.688 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.688 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.688 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.947 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.947 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.947 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: ]] 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.948 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.516 nvme0n1 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.516 04:12:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: ]] 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.516 04:12:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.516 04:12:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.775 nvme0n1 00:24:39.775 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.775 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.775 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.775 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.775 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: ]] 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.035 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:40.036 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.036 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.036 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.036 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.036 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:40.036 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:40.036 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:40.036 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.036 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.036 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:40.036 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.036 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:40.036 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:40.036 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:40.036 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:40.036 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.036 
04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.607 nvme0n1 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.607 04:12:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.174 nvme0n1 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.174 04:12:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:41.174 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: ]] 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.175 04:12:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.164 nvme0n1 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:42.164 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: ]] 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.165 04:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.735 nvme0n1 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: ]] 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.735 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.994 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.994 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.994 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.994 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.994 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.994 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.994 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.994 
04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.994 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.994 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.994 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.994 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.994 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:42.994 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.994 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.928 nvme0n1 00:24:43.928 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.928 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.928 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.928 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.928 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.928 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.928 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.928 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.928 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.928 04:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: ]] 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.928 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.497 nvme0n1 00:24:44.497 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.497 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.497 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.497 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.497 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.497 04:12:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.756 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.756 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.756 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.756 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.757 04:12:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.757 04:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.695 nvme0n1 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: ]] 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.695 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:45.696 nvme0n1 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.696 04:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: ]] 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.696 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.956 nvme0n1 00:24:45.956 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.956 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.956 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.956 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.956 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.956 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.956 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.956 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.956 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.956 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.956 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.956 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.956 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:45.956 
04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.956 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:45.956 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:45.956 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:45.956 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:45.956 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: ]] 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.957 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.217 nvme0n1 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: ]] 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.217 
04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.217 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.476 nvme0n1 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.476 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.477 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.477 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.477 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.477 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.477 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.477 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.477 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:46.477 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.477 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.735 nvme0n1 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: ]] 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.735 04:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.994 nvme0n1 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.994 
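The section as a whole is one sweep of the sha512 digest across every DH group and every key slot; the driving loop in host/auth.sh, paraphrased from the @101-@104 trace lines with the helper bodies elided, is roughly:

  # sweep each DH group with each configured key slot for the sha512 digest
  for dhgroup in "${dhgroups[@]}"; do            # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
    for keyid in "${!keys[@]}"; do               # key slots 0..4
      nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"   # program the kernel nvmet target side
      connect_authenticate sha512 "$dhgroup" "$keyid"   # attach, verify, detach on the SPDK side
    done
  done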
04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: ]] 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.994 04:12:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.994 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.252 nvme0n1 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:47.252 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:47.253 04:12:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: ]] 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.253 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.511 nvme0n1 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: ]] 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.511 04:12:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.511 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.770 nvme0n1 00:24:47.770 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.770 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.770 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.770 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.770 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.770 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.770 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.770 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.770 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.770 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.770 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.770 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.770 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:47.771 
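The echo lines inside each nvmet_auth_set_key call ('hmac(sha512)', the DH group name, the DHHC-1 secret and, when one exists, the controller secret) program the kernel nvmet side of the handshake. A sketch of that step, assuming the usual nvmet configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and a hypothetical host_dir path, with $key/$ckey standing in for the DHHC-1 strings shown in the trace:

  # path and attribute names are assumptions, not taken from this log
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host_dir/dhchap_hash"      # digest used for DH-HMAC-CHAP
  echo ffdhe3072      > "$host_dir/dhchap_dhgroup"   # DH group for this pass
  echo "$key"         > "$host_dir/dhchap_key"       # DHHC-1 host secret for this key slot
  [[ -z "$ckey" ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # bidirectional key, only when set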
04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.771 04:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
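Each authenticated attach is followed by a short verification and teardown before the next key is tried: the trace lists the controllers, checks that exactly the expected nvme0 came up, then detaches it. Using the same rpc_cmd wrapper seen in the trace, that check reduces to:

  # confirm the authenticated controller registered under the expected name, then detach it
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$name" == nvme0 ]]                       # the host/auth.sh@64 comparison in the trace
  rpc_cmd bdev_nvme_detach_controller nvme0    # tear down before the next key/dhgroup pass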
00:24:48.031 nvme0n1 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: ]] 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:48.031 04:12:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.031 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.032 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:48.032 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.032 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:48.032 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:48.032 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:48.032 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:48.032 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.032 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.293 nvme0n1 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.293 04:12:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: ]] 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.293 04:12:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.293 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.554 nvme0n1 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: ]] 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.554 04:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.813 nvme0n1 00:24:48.813 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.813 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.813 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.813 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.813 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.813 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: ]] 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.074 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.335 nvme0n1 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.335 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.596 nvme0n1 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: ]] 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.596 04:12:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.596 04:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.165 nvme0n1 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: ]] 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:50.165 04:12:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.165 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.734 nvme0n1 00:24:50.734 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.734 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.734 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.734 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.734 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.734 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.734 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.734 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.734 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.734 04:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: ]] 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.734 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.301 nvme0n1 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: ]] 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.301 04:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.871 nvme0n1 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:51.871 04:12:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.871 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.441 nvme0n1 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I4NmY0ZmFmMzBjMzljMTAzNjA4NDdiZTI1MTZiZWG0bdAV: 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: ]] 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdiOGI3NTMyZTI4NWVjODdiMmNlN2FkZjdmMzg2NmE1ODZhNjlkYTNlY2E3ODM0NmFjOGI4NmQ2NzQxMDk3OO43C4I=: 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:52.441 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:52.442 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:52.442 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.442 04:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.382 nvme0n1 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: ]] 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:53.382 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:53.383 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.383 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.324 nvme0n1 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.324 04:12:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: ]] 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.324 04:12:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.324 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.261 nvme0n1 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODliY2VmYTA0MTczMzYzYjIzY2E0NjRlNjAxMjc1NDgxN2Y5Nzc2MTVmNTg4MDkwor4Z+Q==: 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: ]] 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5MmJhZDVkNDRhZDdiOWE5MDY3YTNmNDcxZDJlNzn5VSru: 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:55.261 04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.261 
04:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.198 nvme0n1 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWU4ODI0OTQzNWMzZjllZmM2MmRkYmYzNjM4ZDc2MmYzMDFhNWI1NTRjZDdlMDFkYzdhMjcyZmU2Y2UxM2YyNhd+DZ4=: 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.198 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.199 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.199 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.199 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.199 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.199 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.199 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.199 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.199 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.199 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:56.199 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.199 04:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.136 nvme0n1 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: ]] 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.136 request: 00:24:57.136 { 00:24:57.136 "name": "nvme0", 00:24:57.136 "trtype": "tcp", 00:24:57.136 "traddr": "10.0.0.1", 00:24:57.136 "adrfam": "ipv4", 00:24:57.136 "trsvcid": "4420", 00:24:57.136 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:57.136 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:57.136 "prchk_reftag": false, 00:24:57.136 "prchk_guard": false, 00:24:57.136 "hdgst": false, 00:24:57.136 "ddgst": false, 00:24:57.136 "allow_unrecognized_csi": false, 00:24:57.136 "method": "bdev_nvme_attach_controller", 00:24:57.136 "req_id": 1 00:24:57.136 } 00:24:57.136 Got JSON-RPC error response 00:24:57.136 response: 00:24:57.136 { 00:24:57.136 "code": -5, 00:24:57.136 "message": "Input/output error" 00:24:57.136 } 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
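The get_main_ns_ip trace just above reduces to a small transport lookup: for tcp it resolves the initiator-side address (NVMF_INITIATOR_IP, 10.0.0.1 in this run) rather than the target address used for rdma. A condensed sketch of that selection follows; the function and variable names are taken from the trace, while the transport variable (shown here as TEST_TRANSPORT) and the compressed control flow, with the fallback [[ -z ... ]] checks omitted, are assumptions:

# Condensed restatement of the nvmf/common.sh helper traced above; names and
# values come from the log, the transport variable and shape are illustrative.
get_main_ns_ip() {
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    local var=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
    echo "${!var}"                                # resolves to 10.0.0.1 in this run
}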
00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.136 request: 00:24:57.136 { 00:24:57.136 "name": "nvme0", 00:24:57.136 "trtype": "tcp", 00:24:57.136 "traddr": "10.0.0.1", 00:24:57.136 "adrfam": "ipv4", 00:24:57.136 "trsvcid": "4420", 00:24:57.136 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:57.136 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:57.136 "prchk_reftag": false, 00:24:57.136 "prchk_guard": false, 00:24:57.136 "hdgst": false, 00:24:57.136 "ddgst": false, 00:24:57.136 "dhchap_key": "key2", 00:24:57.136 "allow_unrecognized_csi": false, 00:24:57.136 "method": "bdev_nvme_attach_controller", 00:24:57.136 "req_id": 1 00:24:57.136 } 00:24:57.136 Got JSON-RPC error response 00:24:57.136 response: 00:24:57.136 { 00:24:57.136 "code": -5, 00:24:57.136 "message": "Input/output error" 00:24:57.136 } 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:57.136 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:57.137 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.137 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.137 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:57.137 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.137 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
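Every attach attempt in this block goes through rpc_cmd, a thin wrapper over SPDK's scripts/rpc.py, and the request JSON echoed back with each error shows the exact parameters in play. A minimal sketch of the same call issued directly, assuming the default application socket and that the named keyring entries (key1, ckey2) were registered earlier in the test:

# Hypothetical direct form of the rpc_cmd lines in this trace; the flags mirror
# the request JSON above, and the socket path is assumed to be the SPDK default.
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey2

When the supplied DH-HMAC-CHAP key material does not match what the kernel target was configured with, the controller never comes up and the RPC fails with the -5 Input/output error captured in these responses, which is exactly what the NOT wrapper asserts.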
00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.397 request: 00:24:57.397 { 00:24:57.397 "name": "nvme0", 00:24:57.397 "trtype": "tcp", 00:24:57.397 "traddr": "10.0.0.1", 00:24:57.397 "adrfam": "ipv4", 00:24:57.397 "trsvcid": "4420", 00:24:57.397 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:57.397 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:57.397 "prchk_reftag": false, 00:24:57.397 "prchk_guard": false, 00:24:57.397 "hdgst": false, 00:24:57.397 "ddgst": false, 00:24:57.397 "dhchap_key": "key1", 00:24:57.397 "dhchap_ctrlr_key": "ckey2", 00:24:57.397 "allow_unrecognized_csi": false, 00:24:57.397 "method": "bdev_nvme_attach_controller", 00:24:57.397 "req_id": 1 00:24:57.397 } 00:24:57.397 Got JSON-RPC error response 00:24:57.397 response: 00:24:57.397 { 00:24:57.397 "code": -5, 00:24:57.397 "message": "Input/output 
error" 00:24:57.397 } 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.397 nvme0n1 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: ]] 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.397 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.658 request: 00:24:57.658 { 00:24:57.658 "name": "nvme0", 00:24:57.658 "dhchap_key": "key1", 00:24:57.658 "dhchap_ctrlr_key": "ckey2", 00:24:57.658 "method": "bdev_nvme_set_keys", 00:24:57.658 "req_id": 1 00:24:57.658 } 00:24:57.658 Got JSON-RPC error response 00:24:57.658 response: 00:24:57.658 { 00:24:57.658 "code": -13, 00:24:57.658 "message": "Permission denied" 00:24:57.658 } 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:24:57.658 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.659 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.659 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:57.659 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.659 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.659 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:57.659 04:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:59.035 04:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.035 04:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:59.035 04:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.035 04:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.035 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.035 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:59.035 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWZmNmJiNGRkYzRhYTE3MWMzZGZjZmRiMzM1NWZhM2NmMGQ0NDVmZjIzNjI5NTQxxlYxbg==: 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: ]] 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWUwY2I0Mzk5ZTlkNDc2YTAxYzMyMGJhMzY3MDlmMzY2YzUyYTdhZTg2ODg1NDUzrLgG7g==: 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.036 
04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.036 nvme0n1 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk3ZmI5MDE0ODc3YzQxMDFhN2QyOTA1MDY3YzcxYjULCADE: 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: ]] 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ4ZWJmNmJmNjk0N2FlOWM3MzlkOTgxOGRlYWE4MGIM30kT: 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:59.036 04:12:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.036 request: 00:24:59.036 { 00:24:59.036 "name": "nvme0", 00:24:59.036 "dhchap_key": "key2", 00:24:59.036 "dhchap_ctrlr_key": "ckey1", 00:24:59.036 "method": "bdev_nvme_set_keys", 00:24:59.036 "req_id": 1 00:24:59.036 } 00:24:59.036 Got JSON-RPC error response 00:24:59.036 response: 00:24:59.036 { 00:24:59.036 "code": -13, 00:24:59.036 "message": "Permission denied" 00:24:59.036 } 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:59.036 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:59.972 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.972 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:59.972 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.972 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.972 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.972 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:24:59.972 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:24:59.972 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:24:59.972 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:59.972 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:59.972 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:59.972 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:59.972 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:59.972 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:59.972 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:59.972 rmmod nvme_tcp 00:24:59.972 rmmod nvme_fabrics 00:25:00.231 
04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2486648 ']' 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2486648 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2486648 ']' 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2486648 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2486648 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2486648' 00:25:00.231 killing process with pid 2486648 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2486648 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2486648 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.231 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.768 04:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:02.768 04:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:02.768 04:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:02.768 04:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:02.768 04:12:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:02.768 04:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:02.768 04:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:02.768 04:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:02.768 04:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:02.768 04:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:02.768 04:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:02.768 04:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:02.768 04:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:03.706 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:03.706 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:03.706 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:03.706 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:03.706 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:03.706 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:03.706 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:03.706 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:03.706 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:03.706 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:03.706 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:03.706 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:03.706 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:03.706 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:03.706 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:03.706 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:04.644 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:25:04.905 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.G40 /tmp/spdk.key-null.65C /tmp/spdk.key-sha256.QAq /tmp/spdk.key-sha384.ZJJ /tmp/spdk.key-sha512.PkW /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:04.905 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:06.286 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:06.286 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:06.286 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:06.286 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:06.286 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:06.286 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:06.286 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:06.286 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:06.286 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:06.286 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:06.286 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:06.286 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:06.286 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:25:06.286 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:06.286 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:06.286 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:06.286 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:06.286 00:25:06.286 real 0m50.167s 00:25:06.286 user 0m47.821s 00:25:06.286 sys 0m6.151s 00:25:06.286 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:06.286 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.286 ************************************ 00:25:06.286 END TEST nvmf_auth_host 00:25:06.286 ************************************ 00:25:06.286 04:13:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:06.286 04:13:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 ************************************ 00:25:06.287 START TEST nvmf_digest 00:25:06.287 ************************************ 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:06.287 * Looking for test storage... 00:25:06.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 
00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:06.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.287 --rc genhtml_branch_coverage=1 00:25:06.287 --rc genhtml_function_coverage=1 00:25:06.287 --rc genhtml_legend=1 00:25:06.287 --rc geninfo_all_blocks=1 00:25:06.287 --rc geninfo_unexecuted_blocks=1 00:25:06.287 00:25:06.287 ' 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:06.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.287 --rc genhtml_branch_coverage=1 00:25:06.287 --rc genhtml_function_coverage=1 00:25:06.287 --rc genhtml_legend=1 00:25:06.287 --rc geninfo_all_blocks=1 00:25:06.287 --rc geninfo_unexecuted_blocks=1 00:25:06.287 00:25:06.287 ' 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:06.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.287 --rc genhtml_branch_coverage=1 00:25:06.287 --rc genhtml_function_coverage=1 00:25:06.287 --rc genhtml_legend=1 00:25:06.287 --rc geninfo_all_blocks=1 00:25:06.287 --rc geninfo_unexecuted_blocks=1 00:25:06.287 00:25:06.287 ' 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:06.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.287 --rc genhtml_branch_coverage=1 00:25:06.287 --rc genhtml_function_coverage=1 00:25:06.287 --rc genhtml_legend=1 00:25:06.287 --rc geninfo_all_blocks=1 00:25:06.287 --rc geninfo_unexecuted_blocks=1 00:25:06.287 00:25:06.287 ' 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
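The scripts/common.sh trace above is the stock lcov version gate: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them field by field, and because it succeeds here (1.15 is below 2) the --rc style lcov options are the ones exported immediately afterwards. A condensed sketch of that comparison, reusing the names visible in the trace (lt, ver1, ver2) with the loop bound simplified relative to the traced cmp_versions:

# Field-wise version compare restating the cmp_versions steps traced above.
lt() {
    local IFS=.-: v
    local -a ver1=($1) ver2=($2)
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left side is newer
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # 1 < 2 here, so lt succeeds
    done
    return 1   # equal versions are not "less than"
}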
00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:06.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:06.287 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ 
tcp != \t\c\p ]] 00:25:06.288 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:06.288 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:06.288 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.288 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:06.288 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:06.288 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:06.288 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.288 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.288 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.288 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:06.288 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:06.288 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:06.288 04:13:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:08.824 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:08.824 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:08.824 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:08.824 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.824 04:13:02 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:08.824 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:25:08.824 00:25:08.824 --- 10.0.0.2 ping statistics --- 00:25:08.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.825 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:08.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:25:08.825 00:25:08.825 --- 10.0.0.1 ping statistics --- 00:25:08.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.825 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:08.825 ************************************ 00:25:08.825 START TEST nvmf_digest_clean 00:25:08.825 ************************************ 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2496121 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 
2496121 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2496121 ']' 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.825 04:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:08.825 [2024-12-10 04:13:03.020810] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:25:08.825 [2024-12-10 04:13:03.020896] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.825 [2024-12-10 04:13:03.091924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.825 [2024-12-10 04:13:03.145157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.825 [2024-12-10 04:13:03.145248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.825 [2024-12-10 04:13:03.145262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.825 [2024-12-10 04:13:03.145273] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.825 [2024-12-10 04:13:03.145281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
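The nvmf_tcp_init block traced above builds a two-node topology out of the single dual-port E810 card: the target port (cvl_0_0) is moved into its own network namespace and the initiator port (cvl_0_1) stays in the default one, so the two ports behave like separate hosts. Condensed from the commands in the trace (interface names, namespace name, and addresses are the ones the log reports):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

Every nvmf_tgt invocation that follows is prefixed with "ip netns exec cvl_0_0_ns_spdk", so the target listens on 10.0.0.2 while bdevperf connects from the default namespace.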
00:25:08.825 [2024-12-10 04:13:03.145890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.083 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.083 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:09.083 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:09.083 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:09.083 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:09.083 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.083 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:09.083 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:09.083 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:09.084 null0 00:25:09.084 [2024-12-10 04:13:03.385043] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.084 [2024-12-10 04:13:03.409283] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2496140 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2496140 /var/tmp/bperf.sock 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2496140 ']' 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
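The rpc_cmd calls behind common_target_config are not expanded in the trace; only their effects are visible (a null0 bdev, the TCP transport init notice, and a listener on 10.0.0.2 port 4420). A roughly equivalent manual sequence against the target's default RPC socket would look like the following sketch, built from standard SPDK RPCs and the subsystem NQN the initiator attaches to later; the null bdev size and block size are illustrative assumptions, only the bdev name comes from the log:

  scripts/rpc.py framework_start_init                      # the target was started with --wait-for-rpc
  scripts/rpc.py bdev_null_create null0 100 4096           # 100 MiB / 4 KiB blocks, sizes assumed
  scripts/rpc.py nvmf_create_transport -t tcp -o           # NVMF_TRANSPORT_OPTS computed earlier in the trace
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420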
00:25:09.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.084 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:09.084 [2024-12-10 04:13:03.462824] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:25:09.084 [2024-12-10 04:13:03.462926] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496140 ] 00:25:09.342 [2024-12-10 04:13:03.529984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.342 [2024-12-10 04:13:03.590000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.342 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.342 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:09.342 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:09.342 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:09.342 04:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:09.975 04:13:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:09.975 04:13:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:10.233 nvme0n1 00:25:10.233 04:13:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:10.234 04:13:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:10.234 Running I/O for 2 seconds... 
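Each run_bperf pass drives the target from a short-lived bdevperf process with its own RPC socket. Condensed from the trace above, the first pass (4 KiB random reads at queue depth 128, data digests enabled) amounts to:

  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # --ddgst turns on NVMe/TCP data digests, so every I/O exercises a crc32c operation
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The three later passes only change the workload knobs: -o 131072 -q 16 for the large-block runs and -w randwrite for the write runs.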
00:25:12.551 18905.00 IOPS, 73.85 MiB/s [2024-12-10T03:13:06.940Z] 18758.50 IOPS, 73.28 MiB/s 00:25:12.551 Latency(us) 00:25:12.551 [2024-12-10T03:13:06.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.551 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:12.551 nvme0n1 : 2.01 18783.05 73.37 0.00 0.00 6806.01 3155.44 16602.45 00:25:12.551 [2024-12-10T03:13:06.940Z] =================================================================================================================== 00:25:12.551 [2024-12-10T03:13:06.940Z] Total : 18783.05 73.37 0.00 0.00 6806.01 3155.44 16602.45 00:25:12.551 { 00:25:12.551 "results": [ 00:25:12.551 { 00:25:12.551 "job": "nvme0n1", 00:25:12.551 "core_mask": "0x2", 00:25:12.551 "workload": "randread", 00:25:12.551 "status": "finished", 00:25:12.551 "queue_depth": 128, 00:25:12.551 "io_size": 4096, 00:25:12.551 "runtime": 2.005425, 00:25:12.551 "iops": 18783.050974232396, 00:25:12.551 "mibps": 73.3712928680953, 00:25:12.551 "io_failed": 0, 00:25:12.551 "io_timeout": 0, 00:25:12.551 "avg_latency_us": 6806.0077700297725, 00:25:12.551 "min_latency_us": 3155.437037037037, 00:25:12.551 "max_latency_us": 16602.453333333335 00:25:12.551 } 00:25:12.551 ], 00:25:12.551 "core_count": 1 00:25:12.551 } 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:12.551 | select(.opcode=="crc32c") 00:25:12.551 | "\(.module_name) \(.executed)"' 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2496140 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2496140 ']' 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2496140 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2496140 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2496140' 00:25:12.551 killing process with pid 2496140 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2496140 00:25:12.551 Received shutdown signal, test time was about 2.000000 seconds 00:25:12.551 00:25:12.551 Latency(us) 00:25:12.551 [2024-12-10T03:13:06.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.551 [2024-12-10T03:13:06.940Z] =================================================================================================================== 00:25:12.551 [2024-12-10T03:13:06.940Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:12.551 04:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2496140 00:25:12.809 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:12.809 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:12.809 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:12.809 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:12.809 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:12.809 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:12.809 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:12.809 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2496645 00:25:12.809 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:12.809 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2496645 /var/tmp/bperf.sock 00:25:12.809 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2496645 ']' 00:25:12.809 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:12.809 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:12.809 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:12.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:12.809 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:12.809 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:12.809 [2024-12-10 04:13:07.099367] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:25:12.810 [2024-12-10 04:13:07.099448] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496645 ] 00:25:12.810 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:12.810 Zero copy mechanism will not be used. 00:25:12.810 [2024-12-10 04:13:07.166748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.067 [2024-12-10 04:13:07.223727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.068 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.068 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:13.068 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:13.068 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:13.068 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:13.325 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.325 04:13:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.891 nvme0n1 00:25:13.891 04:13:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:13.891 04:13:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:13.891 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:13.891 Zero copy mechanism will not be used. 00:25:13.891 Running I/O for 2 seconds... 
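The "I/O size of 131072 is greater than zero copy threshold (65536)" notices are informational: the large-block passes simply fall back to copying socket buffers. As a sanity check on the reported numbers, the mibps field in each result block is just iops scaled by the I/O size; for the 4 KiB randread pass above:

  awk 'BEGIN { printf "%.2f MiB/s\n", 18783.05 * 4096 / (1024 * 1024) }'
  # -> 73.37 MiB/s, matching the "mibps" value in that run's JSON results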
00:25:16.206 6179.00 IOPS, 772.38 MiB/s [2024-12-10T03:13:10.595Z] 6009.00 IOPS, 751.12 MiB/s 00:25:16.206 Latency(us) 00:25:16.206 [2024-12-10T03:13:10.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.206 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:16.206 nvme0n1 : 2.00 6007.43 750.93 0.00 0.00 2659.28 697.84 5048.70 00:25:16.206 [2024-12-10T03:13:10.595Z] =================================================================================================================== 00:25:16.206 [2024-12-10T03:13:10.595Z] Total : 6007.43 750.93 0.00 0.00 2659.28 697.84 5048.70 00:25:16.206 { 00:25:16.206 "results": [ 00:25:16.206 { 00:25:16.206 "job": "nvme0n1", 00:25:16.206 "core_mask": "0x2", 00:25:16.206 "workload": "randread", 00:25:16.206 "status": "finished", 00:25:16.206 "queue_depth": 16, 00:25:16.206 "io_size": 131072, 00:25:16.206 "runtime": 2.003185, 00:25:16.206 "iops": 6007.433162688419, 00:25:16.206 "mibps": 750.9291453360523, 00:25:16.206 "io_failed": 0, 00:25:16.206 "io_timeout": 0, 00:25:16.206 "avg_latency_us": 2659.283607063936, 00:25:16.206 "min_latency_us": 697.837037037037, 00:25:16.206 "max_latency_us": 5048.69925925926 00:25:16.206 } 00:25:16.206 ], 00:25:16.206 "core_count": 1 00:25:16.206 } 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:16.206 | select(.opcode=="crc32c") 00:25:16.206 | "\(.module_name) \(.executed)"' 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2496645 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2496645 ']' 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2496645 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2496645 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2496645' 00:25:16.206 killing process with pid 2496645 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2496645 00:25:16.206 Received shutdown signal, test time was about 2.000000 seconds 00:25:16.206 00:25:16.206 Latency(us) 00:25:16.206 [2024-12-10T03:13:10.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.206 [2024-12-10T03:13:10.595Z] =================================================================================================================== 00:25:16.206 [2024-12-10T03:13:10.595Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:16.206 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2496645 00:25:16.465 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:16.465 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:16.465 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:16.465 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:16.465 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:16.465 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:16.465 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:16.465 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2497084 00:25:16.465 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:16.465 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2497084 /var/tmp/bperf.sock 00:25:16.465 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2497084 ']' 00:25:16.465 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:16.465 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.465 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:16.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:16.465 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.465 04:13:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:16.465 [2024-12-10 04:13:10.798716] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:25:16.465 [2024-12-10 04:13:10.798795] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497084 ] 00:25:16.723 [2024-12-10 04:13:10.864145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.723 [2024-12-10 04:13:10.918077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.723 04:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.723 04:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:16.723 04:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:16.723 04:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:16.723 04:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:17.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:17.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:17.548 nvme0n1 00:25:17.548 04:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:17.548 04:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:17.549 Running I/O for 2 seconds... 
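Every pass ends with the same verification step, visible in the result blocks before and after this point: the bperf application's accel statistics are read back and the test asserts that crc32c work was actually executed, and by the expected module (the software path here, since the suite runs with scan_dsa=false). The filter used above is reproduced on one line for readability:

  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # expected shape of the output here: software <non-zero count>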
00:25:19.867 21949.00 IOPS, 85.74 MiB/s [2024-12-10T03:13:14.256Z] 20797.00 IOPS, 81.24 MiB/s 00:25:19.867 Latency(us) 00:25:19.867 [2024-12-10T03:13:14.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.867 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:19.867 nvme0n1 : 2.01 20789.57 81.21 0.00 0.00 6143.14 2633.58 12524.66 00:25:19.867 [2024-12-10T03:13:14.256Z] =================================================================================================================== 00:25:19.867 [2024-12-10T03:13:14.256Z] Total : 20789.57 81.21 0.00 0.00 6143.14 2633.58 12524.66 00:25:19.867 { 00:25:19.867 "results": [ 00:25:19.867 { 00:25:19.867 "job": "nvme0n1", 00:25:19.867 "core_mask": "0x2", 00:25:19.867 "workload": "randwrite", 00:25:19.867 "status": "finished", 00:25:19.867 "queue_depth": 128, 00:25:19.867 "io_size": 4096, 00:25:19.867 "runtime": 2.006487, 00:25:19.867 "iops": 20789.569032841977, 00:25:19.867 "mibps": 81.20925403453897, 00:25:19.867 "io_failed": 0, 00:25:19.867 "io_timeout": 0, 00:25:19.867 "avg_latency_us": 6143.14070693026, 00:25:19.867 "min_latency_us": 2633.5762962962963, 00:25:19.867 "max_latency_us": 12524.657777777778 00:25:19.867 } 00:25:19.867 ], 00:25:19.867 "core_count": 1 00:25:19.867 } 00:25:19.867 04:13:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:19.867 04:13:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:19.867 04:13:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:19.867 04:13:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:19.867 | select(.opcode=="crc32c") 00:25:19.867 | "\(.module_name) \(.executed)"' 00:25:19.867 04:13:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:19.867 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:19.867 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:19.867 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:19.867 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:19.867 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2497084 00:25:19.867 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2497084 ']' 00:25:19.867 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2497084 00:25:19.867 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:19.867 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:19.867 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2497084 00:25:19.867 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:19.867 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:25:19.867 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2497084' 00:25:19.867 killing process with pid 2497084 00:25:19.867 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2497084 00:25:19.867 Received shutdown signal, test time was about 2.000000 seconds 00:25:19.867 00:25:19.867 Latency(us) 00:25:19.867 [2024-12-10T03:13:14.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.867 [2024-12-10T03:13:14.256Z] =================================================================================================================== 00:25:19.867 [2024-12-10T03:13:14.256Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:19.867 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2497084 00:25:20.126 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:20.126 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:20.126 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:20.126 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:20.126 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:20.126 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:20.126 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:20.126 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2497488 00:25:20.126 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:20.126 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2497488 /var/tmp/bperf.sock 00:25:20.126 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2497488 ']' 00:25:20.126 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:20.126 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.126 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:20.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:20.126 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.126 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:20.126 [2024-12-10 04:13:14.461254] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:25:20.126 [2024-12-10 04:13:14.461334] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497488 ] 00:25:20.126 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:20.126 Zero copy mechanism will not be used. 00:25:20.384 [2024-12-10 04:13:14.529130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.384 [2024-12-10 04:13:14.582783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.384 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:20.384 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:20.384 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:20.384 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:20.384 04:13:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:20.952 04:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:20.952 04:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:21.210 nvme0n1 00:25:21.210 04:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:21.210 04:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:21.468 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:21.468 Zero copy mechanism will not be used. 00:25:21.468 Running I/O for 2 seconds... 
00:25:23.342 5759.00 IOPS, 719.88 MiB/s [2024-12-10T03:13:17.731Z] 6025.50 IOPS, 753.19 MiB/s 00:25:23.342 Latency(us) 00:25:23.342 [2024-12-10T03:13:17.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.342 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:23.342 nvme0n1 : 2.00 6022.44 752.81 0.00 0.00 2649.39 1978.22 7815.77 00:25:23.342 [2024-12-10T03:13:17.731Z] =================================================================================================================== 00:25:23.342 [2024-12-10T03:13:17.731Z] Total : 6022.44 752.81 0.00 0.00 2649.39 1978.22 7815.77 00:25:23.342 { 00:25:23.342 "results": [ 00:25:23.342 { 00:25:23.342 "job": "nvme0n1", 00:25:23.342 "core_mask": "0x2", 00:25:23.342 "workload": "randwrite", 00:25:23.342 "status": "finished", 00:25:23.342 "queue_depth": 16, 00:25:23.342 "io_size": 131072, 00:25:23.342 "runtime": 2.004337, 00:25:23.342 "iops": 6022.440338126772, 00:25:23.342 "mibps": 752.8050422658465, 00:25:23.342 "io_failed": 0, 00:25:23.342 "io_timeout": 0, 00:25:23.342 "avg_latency_us": 2649.390882709402, 00:25:23.342 "min_latency_us": 1978.2162962962964, 00:25:23.342 "max_latency_us": 7815.774814814815 00:25:23.342 } 00:25:23.342 ], 00:25:23.342 "core_count": 1 00:25:23.342 } 00:25:23.342 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:23.342 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:23.342 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:23.342 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:23.342 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:23.342 | select(.opcode=="crc32c") 00:25:23.342 | "\(.module_name) \(.executed)"' 00:25:23.600 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:23.600 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:23.600 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:23.600 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:23.600 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2497488 00:25:23.600 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2497488 ']' 00:25:23.600 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2497488 00:25:23.600 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:23.600 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:23.600 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2497488 00:25:23.600 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:23.600 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:25:23.600 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2497488' 00:25:23.600 killing process with pid 2497488 00:25:23.600 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2497488 00:25:23.600 Received shutdown signal, test time was about 2.000000 seconds 00:25:23.600 00:25:23.600 Latency(us) 00:25:23.600 [2024-12-10T03:13:17.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.600 [2024-12-10T03:13:17.989Z] =================================================================================================================== 00:25:23.600 [2024-12-10T03:13:17.989Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:23.600 04:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2497488 00:25:23.858 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2496121 00:25:23.858 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2496121 ']' 00:25:23.858 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2496121 00:25:23.858 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:23.858 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:23.858 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2496121 00:25:23.858 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:23.858 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:23.858 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2496121' 00:25:23.858 killing process with pid 2496121 00:25:23.858 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2496121 00:25:23.858 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2496121 00:25:24.117 00:25:24.117 real 0m15.489s 00:25:24.117 user 0m31.114s 00:25:24.117 sys 0m4.166s 00:25:24.117 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:24.117 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:24.117 ************************************ 00:25:24.117 END TEST nvmf_digest_clean 00:25:24.117 ************************************ 00:25:24.117 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:24.117 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:24.117 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:24.117 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:24.377 ************************************ 00:25:24.377 START TEST nvmf_digest_error 00:25:24.377 ************************************ 00:25:24.377 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:25:24.377 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:24.377 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:24.377 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:24.377 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:24.377 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2498042 00:25:24.377 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:24.377 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2498042 00:25:24.377 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2498042 ']' 00:25:24.377 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.377 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:24.377 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.377 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:24.377 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:24.377 [2024-12-10 04:13:18.564482] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:25:24.377 [2024-12-10 04:13:18.564581] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.377 [2024-12-10 04:13:18.639935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.377 [2024-12-10 04:13:18.698648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.377 [2024-12-10 04:13:18.698738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.377 [2024-12-10 04:13:18.698751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.377 [2024-12-10 04:13:18.698777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.377 [2024-12-10 04:13:18.698787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:24.377 [2024-12-10 04:13:18.699380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:24.636 [2024-12-10 04:13:18.824189] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:24.636 null0 00:25:24.636 [2024-12-10 04:13:18.942369] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.636 [2024-12-10 04:13:18.966646] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2498071 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2498071 /var/tmp/bperf.sock 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2498071 ']' 
00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:24.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:24.636 04:13:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:24.636 [2024-12-10 04:13:19.012617] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:25:24.636 [2024-12-10 04:13:19.012694] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498071 ] 00:25:24.894 [2024-12-10 04:13:19.078542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.894 [2024-12-10 04:13:19.135233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.894 04:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:24.894 04:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:24.894 04:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:24.894 04:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:25.152 04:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:25.152 04:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.152 04:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:25.152 04:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.152 04:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:25.152 04:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:25.721 nvme0n1 00:25:25.721 04:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:25.721 04:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.721 04:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
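The initiator half follows directly: bdevperf's NVMe bdev layer is set to keep per-error statistics and retry indefinitely, crc32c corruption stays disabled while the controller is attached with TCP data digest enabled (--ddgst), and only then is the error module armed. RPC calls without an explicit -s socket go to the nvmf target, so the corrupted digests are generated on the target side and the mismatch is reported by the initiator's nvme_tcp receive path, which is what produces the stream of "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" completions below. A condensed sketch of the sequence, with the long workspace paths shortened; all commands and arguments appear in the trace:

  RPC=scripts/rpc.py
  BPERF_SOCK=/var/tmp/bperf.sock

  # Per-error statistics plus unlimited bdev-level retries, so injected
  # digest errors are counted rather than failing the run outright.
  $RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Keep digests correct while connecting ...
  $RPC accel_error_inject_error -o crc32c -t disable

  # ... attach the subsystem with data digest enabled on the TCP connection ...
  $RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # ... then start corrupting crc32c results at the interval used in the trace.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256

  # Trigger the workload bdevperf was started with (-w randread -o 4096 -q 128 -t 2).
  examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests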
00:25:25.721 04:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.721 04:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:25.721 04:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:25.721 Running I/O for 2 seconds... 00:25:25.721 [2024-12-10 04:13:20.067141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.721 [2024-12-10 04:13:20.067211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.721 [2024-12-10 04:13:20.067233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.721 [2024-12-10 04:13:20.087526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.721 [2024-12-10 04:13:20.087582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.721 [2024-12-10 04:13:20.087602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.721 [2024-12-10 04:13:20.102494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.721 [2024-12-10 04:13:20.102526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.721 [2024-12-10 04:13:20.102569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.982 [2024-12-10 04:13:20.118741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.982 [2024-12-10 04:13:20.118788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.982 [2024-12-10 04:13:20.118806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.982 [2024-12-10 04:13:20.132574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.982 [2024-12-10 04:13:20.132608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.982 [2024-12-10 04:13:20.132626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.982 [2024-12-10 04:13:20.149021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.982 [2024-12-10 04:13:20.149051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.982 [2024-12-10 04:13:20.149083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.982 [2024-12-10 04:13:20.164430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.982 [2024-12-10 04:13:20.164476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.982 [2024-12-10 04:13:20.164494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.982 [2024-12-10 04:13:20.175416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.982 [2024-12-10 04:13:20.175443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.982 [2024-12-10 04:13:20.175474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.982 [2024-12-10 04:13:20.190920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.982 [2024-12-10 04:13:20.190963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.982 [2024-12-10 04:13:20.190981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.982 [2024-12-10 04:13:20.204151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.982 [2024-12-10 04:13:20.204182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.982 [2024-12-10 04:13:20.204200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.982 [2024-12-10 04:13:20.219235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.982 [2024-12-10 04:13:20.219264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.982 [2024-12-10 04:13:20.219297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.982 [2024-12-10 04:13:20.233721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.982 [2024-12-10 04:13:20.233752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.982 [2024-12-10 04:13:20.233770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.982 [2024-12-10 04:13:20.244710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.982 [2024-12-10 04:13:20.244740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.982 [2024-12-10 04:13:20.244758] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.982 [2024-12-10 04:13:20.258995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.982 [2024-12-10 04:13:20.259025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.982 [2024-12-10 04:13:20.259057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.982 [2024-12-10 04:13:20.271068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.982 [2024-12-10 04:13:20.271096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.982 [2024-12-10 04:13:20.271127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.982 [2024-12-10 04:13:20.285156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.982 [2024-12-10 04:13:20.285183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.982 [2024-12-10 04:13:20.285213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.982 [2024-12-10 04:13:20.298906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.982 [2024-12-10 04:13:20.298943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.982 [2024-12-10 04:13:20.298961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.982 [2024-12-10 04:13:20.311665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.982 [2024-12-10 04:13:20.311699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.982 [2024-12-10 04:13:20.311717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.982 [2024-12-10 04:13:20.323092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.982 [2024-12-10 04:13:20.323121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.982 [2024-12-10 04:13:20.323153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.982 [2024-12-10 04:13:20.337266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.982 [2024-12-10 04:13:20.337298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.982 [2024-12-10 
04:13:20.337331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.982 [2024-12-10 04:13:20.353472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:25.982 [2024-12-10 04:13:20.353501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.982 [2024-12-10 04:13:20.353531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.241 [2024-12-10 04:13:20.367103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.241 [2024-12-10 04:13:20.367149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.241 [2024-12-10 04:13:20.367167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.241 [2024-12-10 04:13:20.378465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.241 [2024-12-10 04:13:20.378493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.241 [2024-12-10 04:13:20.378524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.241 [2024-12-10 04:13:20.397477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.241 [2024-12-10 04:13:20.397523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.241 [2024-12-10 04:13:20.397541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.242 [2024-12-10 04:13:20.408664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.242 [2024-12-10 04:13:20.408692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.242 [2024-12-10 04:13:20.408724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.242 [2024-12-10 04:13:20.423977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.242 [2024-12-10 04:13:20.424005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.242 [2024-12-10 04:13:20.424036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.242 [2024-12-10 04:13:20.439986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.242 [2024-12-10 04:13:20.440015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12305 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:26.242 [2024-12-10 04:13:20.440046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.242 [2024-12-10 04:13:20.455077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.242 [2024-12-10 04:13:20.455108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.242 [2024-12-10 04:13:20.455141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.242 [2024-12-10 04:13:20.466933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.242 [2024-12-10 04:13:20.466961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.242 [2024-12-10 04:13:20.466993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.242 [2024-12-10 04:13:20.480574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.242 [2024-12-10 04:13:20.480603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.242 [2024-12-10 04:13:20.480634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.242 [2024-12-10 04:13:20.493332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.242 [2024-12-10 04:13:20.493360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.242 [2024-12-10 04:13:20.493390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.242 [2024-12-10 04:13:20.506843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.242 [2024-12-10 04:13:20.506889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.242 [2024-12-10 04:13:20.506907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.242 [2024-12-10 04:13:20.519792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.242 [2024-12-10 04:13:20.519822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.242 [2024-12-10 04:13:20.519839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.242 [2024-12-10 04:13:20.532047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.242 [2024-12-10 04:13:20.532076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:39 nsid:1 lba:19838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.242 [2024-12-10 04:13:20.532114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.242 [2024-12-10 04:13:20.545026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.242 [2024-12-10 04:13:20.545053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.242 [2024-12-10 04:13:20.545084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.242 [2024-12-10 04:13:20.561269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.242 [2024-12-10 04:13:20.561297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.242 [2024-12-10 04:13:20.561326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.242 [2024-12-10 04:13:20.574574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.242 [2024-12-10 04:13:20.574605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.242 [2024-12-10 04:13:20.574623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.242 [2024-12-10 04:13:20.588945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.242 [2024-12-10 04:13:20.588977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.242 [2024-12-10 04:13:20.589010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.242 [2024-12-10 04:13:20.602181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.242 [2024-12-10 04:13:20.602227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.242 [2024-12-10 04:13:20.602246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.242 [2024-12-10 04:13:20.613852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.242 [2024-12-10 04:13:20.613884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.242 [2024-12-10 04:13:20.613902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.501 [2024-12-10 04:13:20.626268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.501 [2024-12-10 04:13:20.626297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.501 [2024-12-10 04:13:20.626328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.501 [2024-12-10 04:13:20.640129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.501 [2024-12-10 04:13:20.640160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.501 [2024-12-10 04:13:20.640192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.501 [2024-12-10 04:13:20.654575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.501 [2024-12-10 04:13:20.654613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.501 [2024-12-10 04:13:20.654631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.501 [2024-12-10 04:13:20.665931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.501 [2024-12-10 04:13:20.665959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.501 [2024-12-10 04:13:20.665989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.501 [2024-12-10 04:13:20.679034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.501 [2024-12-10 04:13:20.679065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.501 [2024-12-10 04:13:20.679097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.501 [2024-12-10 04:13:20.692348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.501 [2024-12-10 04:13:20.692379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.501 [2024-12-10 04:13:20.692411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.501 [2024-12-10 04:13:20.708443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.501 [2024-12-10 04:13:20.708472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.501 [2024-12-10 04:13:20.708503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.501 [2024-12-10 04:13:20.725284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 
00:25:26.501 [2024-12-10 04:13:20.725313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.501 [2024-12-10 04:13:20.725344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.501 [2024-12-10 04:13:20.739619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.502 [2024-12-10 04:13:20.739652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.502 [2024-12-10 04:13:20.739670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.502 [2024-12-10 04:13:20.754977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.502 [2024-12-10 04:13:20.755005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.502 [2024-12-10 04:13:20.755036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.502 [2024-12-10 04:13:20.769610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.502 [2024-12-10 04:13:20.769641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.502 [2024-12-10 04:13:20.769658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.502 [2024-12-10 04:13:20.783990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.502 [2024-12-10 04:13:20.784023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.502 [2024-12-10 04:13:20.784040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.502 [2024-12-10 04:13:20.795372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.502 [2024-12-10 04:13:20.795417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.502 [2024-12-10 04:13:20.795433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.502 [2024-12-10 04:13:20.811846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.502 [2024-12-10 04:13:20.811875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.502 [2024-12-10 04:13:20.811906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.502 [2024-12-10 04:13:20.824904] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.502 [2024-12-10 04:13:20.824932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.502 [2024-12-10 04:13:20.824963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.502 [2024-12-10 04:13:20.837620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.502 [2024-12-10 04:13:20.837652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.502 [2024-12-10 04:13:20.837669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.502 [2024-12-10 04:13:20.852428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.502 [2024-12-10 04:13:20.852472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.502 [2024-12-10 04:13:20.852488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.502 [2024-12-10 04:13:20.866914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.502 [2024-12-10 04:13:20.866945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.502 [2024-12-10 04:13:20.866977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.502 [2024-12-10 04:13:20.878556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.502 [2024-12-10 04:13:20.878615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.502 [2024-12-10 04:13:20.878633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.762 [2024-12-10 04:13:20.894038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.762 [2024-12-10 04:13:20.894068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.762 [2024-12-10 04:13:20.894106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.762 [2024-12-10 04:13:20.909842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.762 [2024-12-10 04:13:20.909887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.762 [2024-12-10 04:13:20.909902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:26.762 [2024-12-10 04:13:20.922942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.762 [2024-12-10 04:13:20.922972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.762 [2024-12-10 04:13:20.923004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.762 [2024-12-10 04:13:20.936991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.762 [2024-12-10 04:13:20.937021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.762 [2024-12-10 04:13:20.937039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.762 [2024-12-10 04:13:20.948270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.762 [2024-12-10 04:13:20.948297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.762 [2024-12-10 04:13:20.948328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.762 [2024-12-10 04:13:20.961254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.762 [2024-12-10 04:13:20.961282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.762 [2024-12-10 04:13:20.961313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.762 [2024-12-10 04:13:20.974854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.762 [2024-12-10 04:13:20.974883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.762 [2024-12-10 04:13:20.974914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.762 [2024-12-10 04:13:20.987171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.762 [2024-12-10 04:13:20.987199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.762 [2024-12-10 04:13:20.987231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.762 [2024-12-10 04:13:21.001787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.762 [2024-12-10 04:13:21.001817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.762 [2024-12-10 04:13:21.001848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.762 [2024-12-10 04:13:21.015903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.762 [2024-12-10 04:13:21.015932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.762 [2024-12-10 04:13:21.015963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.762 [2024-12-10 04:13:21.028202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.762 [2024-12-10 04:13:21.028230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.762 [2024-12-10 04:13:21.028262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.762 [2024-12-10 04:13:21.043787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.762 [2024-12-10 04:13:21.043816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.762 [2024-12-10 04:13:21.043832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.762 18034.00 IOPS, 70.45 MiB/s [2024-12-10T03:13:21.152Z] [2024-12-10 04:13:21.059620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.763 [2024-12-10 04:13:21.059651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.763 [2024-12-10 04:13:21.059684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.763 [2024-12-10 04:13:21.075989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.763 [2024-12-10 04:13:21.076017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.763 [2024-12-10 04:13:21.076048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.763 [2024-12-10 04:13:21.090457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.763 [2024-12-10 04:13:21.090486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.763 [2024-12-10 04:13:21.090519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.763 [2024-12-10 04:13:21.105227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.763 [2024-12-10 04:13:21.105258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.763 
[2024-12-10 04:13:21.105291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.763 [2024-12-10 04:13:21.119856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.763 [2024-12-10 04:13:21.119888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.763 [2024-12-10 04:13:21.119906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.763 [2024-12-10 04:13:21.131453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:26.763 [2024-12-10 04:13:21.131481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.763 [2024-12-10 04:13:21.131521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.023 [2024-12-10 04:13:21.146623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.023 [2024-12-10 04:13:21.146653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.023 [2024-12-10 04:13:21.146685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.023 [2024-12-10 04:13:21.162167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.023 [2024-12-10 04:13:21.162211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.023 [2024-12-10 04:13:21.162229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.023 [2024-12-10 04:13:21.173950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.023 [2024-12-10 04:13:21.173978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.023 [2024-12-10 04:13:21.174008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.023 [2024-12-10 04:13:21.188830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.023 [2024-12-10 04:13:21.188884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.023 [2024-12-10 04:13:21.188901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.023 [2024-12-10 04:13:21.204559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.023 [2024-12-10 04:13:21.204589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10128 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.023 [2024-12-10 04:13:21.204622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.023 [2024-12-10 04:13:21.220864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.023 [2024-12-10 04:13:21.220894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.023 [2024-12-10 04:13:21.220926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.023 [2024-12-10 04:13:21.231364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.024 [2024-12-10 04:13:21.231390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.024 [2024-12-10 04:13:21.231421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.024 [2024-12-10 04:13:21.244749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.024 [2024-12-10 04:13:21.244779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.024 [2024-12-10 04:13:21.244797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.024 [2024-12-10 04:13:21.261366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.024 [2024-12-10 04:13:21.261418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.024 [2024-12-10 04:13:21.261437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.024 [2024-12-10 04:13:21.274762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.024 [2024-12-10 04:13:21.274791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.024 [2024-12-10 04:13:21.274823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.024 [2024-12-10 04:13:21.286261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.024 [2024-12-10 04:13:21.286288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.024 [2024-12-10 04:13:21.286319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.024 [2024-12-10 04:13:21.300540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.024 [2024-12-10 04:13:21.300579] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.024 [2024-12-10 04:13:21.300596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.024 [2024-12-10 04:13:21.314211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.024 [2024-12-10 04:13:21.314239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.024 [2024-12-10 04:13:21.314270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.024 [2024-12-10 04:13:21.328707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.024 [2024-12-10 04:13:21.328738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.024 [2024-12-10 04:13:21.328770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.024 [2024-12-10 04:13:21.344312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.024 [2024-12-10 04:13:21.344344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:68 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.024 [2024-12-10 04:13:21.344378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.024 [2024-12-10 04:13:21.355697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.024 [2024-12-10 04:13:21.355727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.024 [2024-12-10 04:13:21.355759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.024 [2024-12-10 04:13:21.370779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.024 [2024-12-10 04:13:21.370809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.024 [2024-12-10 04:13:21.370827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.024 [2024-12-10 04:13:21.386526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.024 [2024-12-10 04:13:21.386576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.024 [2024-12-10 04:13:21.386594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.024 [2024-12-10 04:13:21.397063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.024 [2024-12-10 
04:13:21.397102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.024 [2024-12-10 04:13:21.397134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.412571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.412602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.412634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.425125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.425153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.425184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.441115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.441161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.441180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.455682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.455713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.455730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.467593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.467622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.467653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.480659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.480690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.480707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.494446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.494484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.494517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.505933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.505961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.505991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.522407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.522436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.522467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.533795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.533825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.533859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.549015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.549043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.549074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.564903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.564933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.564949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.577927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.577974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.577991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.590046] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.590092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.590109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.602860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.602891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.602909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.616127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.616172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.616188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.631137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.631166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.631197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.647182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.647211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.647243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.285 [2024-12-10 04:13:21.657994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.285 [2024-12-10 04:13:21.658022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.285 [2024-12-10 04:13:21.658053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.544 [2024-12-10 04:13:21.671098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.544 [2024-12-10 04:13:21.671143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.544 [2024-12-10 04:13:21.671159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:27.544 [2024-12-10 04:13:21.685665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.544 [2024-12-10 04:13:21.685695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.544 [2024-12-10 04:13:21.685726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.544 [2024-12-10 04:13:21.700709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.544 [2024-12-10 04:13:21.700739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.544 [2024-12-10 04:13:21.700771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.544 [2024-12-10 04:13:21.716982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.544 [2024-12-10 04:13:21.717028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.544 [2024-12-10 04:13:21.717047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.544 [2024-12-10 04:13:21.733219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.544 [2024-12-10 04:13:21.733250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.544 [2024-12-10 04:13:21.733293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.544 [2024-12-10 04:13:21.744196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.544 [2024-12-10 04:13:21.744224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.545 [2024-12-10 04:13:21.744256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.545 [2024-12-10 04:13:21.760313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.545 [2024-12-10 04:13:21.760343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.545 [2024-12-10 04:13:21.760375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.545 [2024-12-10 04:13:21.777296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.545 [2024-12-10 04:13:21.777325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.545 [2024-12-10 04:13:21.777357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.545 [2024-12-10 04:13:21.792730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.545 [2024-12-10 04:13:21.792761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.545 [2024-12-10 04:13:21.792794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.545 [2024-12-10 04:13:21.808697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.545 [2024-12-10 04:13:21.808740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.545 [2024-12-10 04:13:21.808758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.545 [2024-12-10 04:13:21.823707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.545 [2024-12-10 04:13:21.823753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.545 [2024-12-10 04:13:21.823770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.545 [2024-12-10 04:13:21.836180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.545 [2024-12-10 04:13:21.836235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.545 [2024-12-10 04:13:21.836252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.545 [2024-12-10 04:13:21.850675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.545 [2024-12-10 04:13:21.850704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.545 [2024-12-10 04:13:21.850734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.545 [2024-12-10 04:13:21.866682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.545 [2024-12-10 04:13:21.866736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.545 [2024-12-10 04:13:21.866754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.545 [2024-12-10 04:13:21.880316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.545 [2024-12-10 04:13:21.880346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.545 [2024-12-10 04:13:21.880380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.545 [2024-12-10 04:13:21.895325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.545 [2024-12-10 04:13:21.895372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.545 [2024-12-10 04:13:21.895389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.545 [2024-12-10 04:13:21.906766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.545 [2024-12-10 04:13:21.906797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.545 [2024-12-10 04:13:21.906814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.545 [2024-12-10 04:13:21.919616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.545 [2024-12-10 04:13:21.919654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.545 [2024-12-10 04:13:21.919671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.803 [2024-12-10 04:13:21.930479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.803 [2024-12-10 04:13:21.930510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.803 [2024-12-10 04:13:21.930541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.803 [2024-12-10 04:13:21.945476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.803 [2024-12-10 04:13:21.945507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.803 [2024-12-10 04:13:21.945539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.803 [2024-12-10 04:13:21.960268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.803 [2024-12-10 04:13:21.960298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.803 [2024-12-10 04:13:21.960329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.803 [2024-12-10 04:13:21.971101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420) 00:25:27.803 [2024-12-10 04:13:21.971128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:27.803 [2024-12-10 04:13:21.971159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:27.803 [2024-12-10 04:13:21.984762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420)
00:25:27.803 [2024-12-10 04:13:21.984791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.803 [2024-12-10 04:13:21.984825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:27.804 [2024-12-10 04:13:21.998816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420)
00:25:27.804 [2024-12-10 04:13:21.998846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.804 [2024-12-10 04:13:21.998877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:27.804 [2024-12-10 04:13:22.013661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420)
00:25:27.804 [2024-12-10 04:13:22.013691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.804 [2024-12-10 04:13:22.013723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:27.804 [2024-12-10 04:13:22.027799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420)
00:25:27.804 [2024-12-10 04:13:22.027830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.804 [2024-12-10 04:13:22.027847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:27.804 [2024-12-10 04:13:22.042679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15a2420)
00:25:27.804 [2024-12-10 04:13:22.042710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.804 [2024-12-10 04:13:22.042727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:27.804 18140.00 IOPS, 70.86 MiB/s
00:25:27.804 Latency(us)
00:25:27.804 [2024-12-10T03:13:22.193Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:27.804 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:25:27.804 nvme0n1 :       2.01   18154.64      70.92       0.00     0.00    7042.03    3349.62   24272.59
00:25:27.804 [2024-12-10T03:13:22.193Z] ===================================================================================================================
00:25:27.804 [2024-12-10T03:13:22.193Z] Total :   18154.64      70.92       0.00     0.00    7042.03    3349.62   24272.59
00:25:27.804 {
00:25:27.804   "results": [
00:25:27.804     {
00:25:27.804       "job": "nvme0n1",
00:25:27.804       "core_mask": "0x2",
00:25:27.804       "workload": "randread",
00:25:27.804       "status": "finished",
00:25:27.804 "queue_depth": 128, 00:25:27.804 "io_size": 4096, 00:25:27.804 "runtime": 2.005438, 00:25:27.804 "iops": 18154.637540527307, 00:25:27.804 "mibps": 70.91655289268479, 00:25:27.804 "io_failed": 0, 00:25:27.804 "io_timeout": 0, 00:25:27.804 "avg_latency_us": 7042.028924534291, 00:25:27.804 "min_latency_us": 3349.617777777778, 00:25:27.804 "max_latency_us": 24272.59259259259 00:25:27.804 } 00:25:27.804 ], 00:25:27.804 "core_count": 1 00:25:27.804 } 00:25:27.804 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:27.804 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:27.804 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:27.804 | .driver_specific 00:25:27.804 | .nvme_error 00:25:27.804 | .status_code 00:25:27.804 | .command_transient_transport_error' 00:25:27.804 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:28.063 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 )) 00:25:28.063 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2498071 00:25:28.063 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2498071 ']' 00:25:28.063 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2498071 00:25:28.063 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:28.064 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:28.064 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2498071 00:25:28.064 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:28.064 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:28.064 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2498071' 00:25:28.064 killing process with pid 2498071 00:25:28.064 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2498071 00:25:28.064 Received shutdown signal, test time was about 2.000000 seconds 00:25:28.064 00:25:28.064 Latency(us) 00:25:28.064 [2024-12-10T03:13:22.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.064 [2024-12-10T03:13:22.453Z] =================================================================================================================== 00:25:28.064 [2024-12-10T03:13:22.453Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:28.064 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2498071 00:25:28.322 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:28.322 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:28.322 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # rw=randread 00:25:28.322 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:28.322 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:28.322 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2498475 00:25:28.322 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:28.322 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2498475 /var/tmp/bperf.sock 00:25:28.322 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2498475 ']' 00:25:28.322 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:28.322 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:28.322 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:28.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:28.322 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:28.322 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:28.322 [2024-12-10 04:13:22.657318] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:25:28.322 [2024-12-10 04:13:22.657407] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498475 ] 00:25:28.322 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:28.322 Zero copy mechanism will not be used. 
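(Note on the check traced above: after each bdevperf run, get_transient_errcount queries bdev_get_iostat over the bdevperf RPC socket and extracts the per-bdev transient-transport-error counter with jq; the run that just finished reported 142, which satisfies the "> 0" assertion in host/digest.sh. A minimal standalone sketch of that readback, using the rpc.py path from this workspace and the nvme0n1 bdev name seen in the log; variable names here are illustrative, not part of the captured output:

  count=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b nvme0n1 \
          | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( count > 0 ))   # digest corruption must surface as COMMAND TRANSIENT TRANSPORT ERROR completions

The same readback presumably follows the 131072-byte, qd=16 run being set up below.)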
00:25:28.580 [2024-12-10 04:13:22.723812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:28.580 [2024-12-10 04:13:22.778250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:28.580 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:28.580 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:25:28.580 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:28.580 04:13:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:28.838 04:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:28.838 04:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:28.838 04:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:28.838 04:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:28.838 04:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:28.838 04:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:29.407 nvme0n1
00:25:29.407 04:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:29.407 04:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:29.407 04:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:29.407 04:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:29.407 04:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:29.407 04:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:29.407 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:29.407 Zero copy mechanism will not be used.
00:25:29.407 Running I/O for 2 seconds...
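(The burst of data digest errors that follows is the expected outcome of the setup just traced: the controller is attached with the data digest enabled (--ddgst) while the accel crc32c operation is being corrupted on purpose, so the affected READs complete with COMMAND TRANSIENT TRANSPORT ERROR instead of clean data. Condensed from the xtrace above, using the helper names from host/digest.sh (bperf_rpc and bperf_py expand to the rpc.py/bdevperf.py calls against /var/tmp/bperf.sock shown in the trace; the socket behind rpc_cmd is not visible in this excerpt), the sequence is roughly:

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1     # collect per-controller NVMe error statistics
  rpc_cmd accel_error_inject_error -o crc32c -t disable                       # crc32c error injection off while attaching
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                    # data digest on; creates bdev nvme0n1
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32                 # re-arm crc32c corruption, arguments as captured
  bperf_py perform_tests                                                      # bdevperf.py -s /var/tmp/bperf.sock perform_tests

This is a condensed paraphrase of the captured commands, not additional log output.)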
00:25:29.407 [2024-12-10 04:13:23.738615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.407 [2024-12-10 04:13:23.738671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.407 [2024-12-10 04:13:23.738693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.407 [2024-12-10 04:13:23.743590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.407 [2024-12-10 04:13:23.743626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.407 [2024-12-10 04:13:23.743644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.407 [2024-12-10 04:13:23.748224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.407 [2024-12-10 04:13:23.748269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.407 [2024-12-10 04:13:23.748288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.407 [2024-12-10 04:13:23.752826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.407 [2024-12-10 04:13:23.752856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.407 [2024-12-10 04:13:23.752874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.407 [2024-12-10 04:13:23.757493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.407 [2024-12-10 04:13:23.757524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.407 [2024-12-10 04:13:23.757542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.407 [2024-12-10 04:13:23.762250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.407 [2024-12-10 04:13:23.762281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.407 [2024-12-10 04:13:23.762299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.407 [2024-12-10 04:13:23.766982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.407 [2024-12-10 04:13:23.767012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.407 [2024-12-10 04:13:23.767030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.407 [2024-12-10 04:13:23.771520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.407 [2024-12-10 04:13:23.771557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.407 [2024-12-10 04:13:23.771577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.407 [2024-12-10 04:13:23.776084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.407 [2024-12-10 04:13:23.776114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.407 [2024-12-10 04:13:23.776132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.407 [2024-12-10 04:13:23.780670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.407 [2024-12-10 04:13:23.780700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.407 [2024-12-10 04:13:23.780717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.408 [2024-12-10 04:13:23.785201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.408 [2024-12-10 04:13:23.785232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.408 [2024-12-10 04:13:23.785249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.667 [2024-12-10 04:13:23.789779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.667 [2024-12-10 04:13:23.789810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.667 [2024-12-10 04:13:23.789838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.667 [2024-12-10 04:13:23.795276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.667 [2024-12-10 04:13:23.795307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.667 [2024-12-10 04:13:23.795325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.667 [2024-12-10 04:13:23.800286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.667 [2024-12-10 04:13:23.800317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.667 [2024-12-10 04:13:23.800335] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.667 [2024-12-10 04:13:23.805723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.805754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.805772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.812974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.813006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.813025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.819844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.819874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.819892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.825723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.825755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.825773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.830191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.830222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.830241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.835124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.835155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.835180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.841053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.841085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.841103] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.846830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.846862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.846880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.851948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.851994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.852011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.857519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.857558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.857577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.863453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.863499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.863516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.869417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.869462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.869478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.875531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.875574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.875608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.881329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.881376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:29.668 [2024-12-10 04:13:23.881393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.887232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.887268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.887300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.893132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.893163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.893181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.899121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.899151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.899183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.904961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.904992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.905009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.910631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.910663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.910681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.916451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.916483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.916501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.922326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.922358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.922376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.928025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.928056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.928074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.933988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.934018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.934050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.939957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.939989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.940008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.945501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.945556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.945590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.950963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.668 [2024-12-10 04:13:23.950994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.668 [2024-12-10 04:13:23.951012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.668 [2024-12-10 04:13:23.957817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.669 [2024-12-10 04:13:23.957863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.669 [2024-12-10 04:13:23.957881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.669 [2024-12-10 04:13:23.965090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.669 [2024-12-10 04:13:23.965136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.669 [2024-12-10 04:13:23.965153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.669 [2024-12-10 04:13:23.971281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.669 [2024-12-10 04:13:23.971312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.669 [2024-12-10 04:13:23.971330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.669 [2024-12-10 04:13:23.975886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.669 [2024-12-10 04:13:23.975917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.669 [2024-12-10 04:13:23.975935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.669 [2024-12-10 04:13:23.979725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.669 [2024-12-10 04:13:23.979756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.669 [2024-12-10 04:13:23.979773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.669 [2024-12-10 04:13:23.984933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.669 [2024-12-10 04:13:23.984962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.669 [2024-12-10 04:13:23.985002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.669 [2024-12-10 04:13:23.988356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.669 [2024-12-10 04:13:23.988386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.669 [2024-12-10 04:13:23.988403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.669 [2024-12-10 04:13:23.993670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.669 [2024-12-10 04:13:23.993716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.669 [2024-12-10 04:13:23.993733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.669 [2024-12-10 04:13:24.001476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 
00:25:29.669 [2024-12-10 04:13:24.001535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.669 [2024-12-10 04:13:24.001573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.669 [2024-12-10 04:13:24.007649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.669 [2024-12-10 04:13:24.007682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.669 [2024-12-10 04:13:24.007700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.669 [2024-12-10 04:13:24.013914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.669 [2024-12-10 04:13:24.013947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.669 [2024-12-10 04:13:24.013979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.669 [2024-12-10 04:13:24.020288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.669 [2024-12-10 04:13:24.020335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.669 [2024-12-10 04:13:24.020353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.669 [2024-12-10 04:13:24.025637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.669 [2024-12-10 04:13:24.025670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.669 [2024-12-10 04:13:24.025688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.669 [2024-12-10 04:13:24.030889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.669 [2024-12-10 04:13:24.030920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.669 [2024-12-10 04:13:24.030937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.669 [2024-12-10 04:13:24.036842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.669 [2024-12-10 04:13:24.036875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.669 [2024-12-10 04:13:24.036893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.669 [2024-12-10 04:13:24.042726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.669 [2024-12-10 04:13:24.042759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.669 [2024-12-10 04:13:24.042778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.669 [2024-12-10 04:13:24.048463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.669 [2024-12-10 04:13:24.048495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.669 [2024-12-10 04:13:24.048527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.930 [2024-12-10 04:13:24.055880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.930 [2024-12-10 04:13:24.055913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.930 [2024-12-10 04:13:24.055931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.930 [2024-12-10 04:13:24.062664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.930 [2024-12-10 04:13:24.062696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.930 [2024-12-10 04:13:24.062714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.930 [2024-12-10 04:13:24.068204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.930 [2024-12-10 04:13:24.068235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.930 [2024-12-10 04:13:24.068269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.930 [2024-12-10 04:13:24.072637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.930 [2024-12-10 04:13:24.072668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.930 [2024-12-10 04:13:24.072686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.930 [2024-12-10 04:13:24.075884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.930 [2024-12-10 04:13:24.075915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.930 [2024-12-10 04:13:24.075933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.930 [2024-12-10 04:13:24.081137] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.930 [2024-12-10 04:13:24.081169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.930 [2024-12-10 04:13:24.081193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.930 [2024-12-10 04:13:24.085681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.930 [2024-12-10 04:13:24.085727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.930 [2024-12-10 04:13:24.085744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.930 [2024-12-10 04:13:24.090317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.930 [2024-12-10 04:13:24.090348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.930 [2024-12-10 04:13:24.090366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.930 [2024-12-10 04:13:24.095374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.930 [2024-12-10 04:13:24.095405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.930 [2024-12-10 04:13:24.095437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.930 [2024-12-10 04:13:24.100679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.930 [2024-12-10 04:13:24.100720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.930 [2024-12-10 04:13:24.100739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.930 [2024-12-10 04:13:24.107006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.930 [2024-12-10 04:13:24.107050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.930 [2024-12-10 04:13:24.107067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.930 [2024-12-10 04:13:24.111012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.930 [2024-12-10 04:13:24.111043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.930 [2024-12-10 04:13:24.111060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:25:29.930 [2024-12-10 04:13:24.115644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.930 [2024-12-10 04:13:24.115675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.930 [2024-12-10 04:13:24.115692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.930 [2024-12-10 04:13:24.120280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.930 [2024-12-10 04:13:24.120311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.930 [2024-12-10 04:13:24.120328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.930 [2024-12-10 04:13:24.124955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.930 [2024-12-10 04:13:24.124991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.930 [2024-12-10 04:13:24.125008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.930 [2024-12-10 04:13:24.129707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.129738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.129754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.134436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.134466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.134482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.139092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.139122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.139140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.143864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.143894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.143911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.148954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.148985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.149002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.154482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.154514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.154531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.159839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.159883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.159901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.166554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.166585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.166603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.174051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.174083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.174101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.179399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.179430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.179447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.184991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.185022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.185040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.190801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.190834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.190853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.196860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.196892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.196911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.200805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.200836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.200854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.204779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.204810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.204828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.209831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.209863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.209881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.215071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.215103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.215127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.219821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.219852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:29.931 [2024-12-10 04:13:24.219869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.225118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.225150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.225167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.230833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.230865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.230882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.236425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.236457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.236475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.241736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.241768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.241786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.246819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.246851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.246869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.252022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.252056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.252075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.256802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.256835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.256854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.261521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.261564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.261584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.266068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.266099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.266117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.931 [2024-12-10 04:13:24.270770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.931 [2024-12-10 04:13:24.270800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.931 [2024-12-10 04:13:24.270818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.932 [2024-12-10 04:13:24.275344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.932 [2024-12-10 04:13:24.275374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.932 [2024-12-10 04:13:24.275392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.932 [2024-12-10 04:13:24.279778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.932 [2024-12-10 04:13:24.279808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.932 [2024-12-10 04:13:24.279825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.932 [2024-12-10 04:13:24.284311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.932 [2024-12-10 04:13:24.284341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.932 [2024-12-10 04:13:24.284359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.932 [2024-12-10 04:13:24.288898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.932 [2024-12-10 04:13:24.288927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.932 [2024-12-10 04:13:24.288944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:29.932 [2024-12-10 04:13:24.293942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.932 [2024-12-10 04:13:24.293973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.932 [2024-12-10 04:13:24.293991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:29.932 [2024-12-10 04:13:24.298999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.932 [2024-12-10 04:13:24.299030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.932 [2024-12-10 04:13:24.299053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:29.932 [2024-12-10 04:13:24.303739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.932 [2024-12-10 04:13:24.303770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.932 [2024-12-10 04:13:24.303788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:29.932 [2024-12-10 04:13:24.308357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:29.932 [2024-12-10 04:13:24.308388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.932 [2024-12-10 04:13:24.308421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.193 [2024-12-10 04:13:24.313481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.193 [2024-12-10 04:13:24.313529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.193 [2024-12-10 04:13:24.313553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.193 [2024-12-10 04:13:24.318167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.193 [2024-12-10 04:13:24.318199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.193 [2024-12-10 04:13:24.318217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.193 [2024-12-10 04:13:24.322349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 
00:25:30.193 [2024-12-10 04:13:24.322378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.193 [2024-12-10 04:13:24.322410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.193 [2024-12-10 04:13:24.327405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.327436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.327453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.332448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.332480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.332512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.338273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.338305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.338322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.343311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.343348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.343380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.348644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.348675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.348692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.354505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.354558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.354593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.360458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.360490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.360524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.367282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.367326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.367342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.372827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.372860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.372877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.376942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.376973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.376990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.381159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.381190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.381208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.384140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.384170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.384188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.389708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.389740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.389759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.395409] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.395452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.395470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.401386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.401417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.401450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.408886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.408918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.408937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.414834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.414865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.414882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.420304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.420335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.420352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.425362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.425393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.425410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.430824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.430855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.430873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:25:30.194 [2024-12-10 04:13:24.436581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.436613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.436641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.443651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.443684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.443702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.451206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.451237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.451269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.458700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.458732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.458750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.465102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.465133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.465151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.470290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.194 [2024-12-10 04:13:24.470335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.194 [2024-12-10 04:13:24.470352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.194 [2024-12-10 04:13:24.474895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.474924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.474955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.479455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.479499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.479516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.483925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.483955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.483972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.488478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.488525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.488541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.493156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.493186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.493203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.497721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.497752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.497769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.503274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.503328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.503370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.508538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.508595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.508614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.513104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.513135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.513166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.517708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.517754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.517773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.523220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.523251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.523268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.527993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.528037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.528053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.532805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.532835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.532866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.537982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.538027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.538044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.543334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.543366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:30.195 [2024-12-10 04:13:24.543384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.548766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.548798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.548816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.553636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.553667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.553685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.558611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.558642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.558660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.564073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.564103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.564121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.195 [2024-12-10 04:13:24.569559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.195 [2024-12-10 04:13:24.569591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.195 [2024-12-10 04:13:24.569608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.574919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.574958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.574979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.580015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.580048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.580066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.585584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.585629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.585646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.591692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.591725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.591743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.596911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.596943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.596961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.602896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.602928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.602946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.608746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.608778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.608796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.613945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.613977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.613995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.618409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.618440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.618457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.622836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.622866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.622883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.627480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.627510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.627527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.632940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.632971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.632988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.637916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.637947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.637964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.642625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.642655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.642673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.647371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.647401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.647418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.651877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 
00:25:30.457 [2024-12-10 04:13:24.651908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.651925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.656753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.656784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.656803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.661372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.661402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.661425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.666110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.666140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.666158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.670645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.670674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.670691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.675978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.457 [2024-12-10 04:13:24.676008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.457 [2024-12-10 04:13:24.676025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.457 [2024-12-10 04:13:24.682541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.682579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.682596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.689741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.689773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.689790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.695485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.695517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.695534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.699782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.699813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.699831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.704443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.704476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.704510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.710465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.710517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.710535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.716318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.716350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.716368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.721611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.721641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.721675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.727450] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.727481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.727513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.458 5785.00 IOPS, 723.12 MiB/s [2024-12-10T03:13:24.847Z] [2024-12-10 04:13:24.734206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.734237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.734255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.739144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.739189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.739206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.744682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.744714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.744732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.749688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.749719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.749737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.756329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.756368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.756410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.762291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.762339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.762357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.767481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.767529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.767554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.772153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.772184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.772218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.776866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.776897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.776914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.781531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.781569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.781587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.786266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.786296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.786314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.790829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.790858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.790875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.796263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.796309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.796326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.801409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.801440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.801464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.807219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.807251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.807270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.812110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.812140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.812157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.817604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.817636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.817654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.823951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.823983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.824015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.458 [2024-12-10 04:13:24.829654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.458 [2024-12-10 04:13:24.829686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.458 [2024-12-10 04:13:24.829704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.459 [2024-12-10 04:13:24.834935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.459 [2024-12-10 04:13:24.834982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:30.459 [2024-12-10 04:13:24.834999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.720 [2024-12-10 04:13:24.840335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.720 [2024-12-10 04:13:24.840368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.720 [2024-12-10 04:13:24.840385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.720 [2024-12-10 04:13:24.844935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.720 [2024-12-10 04:13:24.844966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.720 [2024-12-10 04:13:24.844983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.720 [2024-12-10 04:13:24.849529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.720 [2024-12-10 04:13:24.849567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.720 [2024-12-10 04:13:24.849585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.720 [2024-12-10 04:13:24.854010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.720 [2024-12-10 04:13:24.854039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.720 [2024-12-10 04:13:24.854072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.720 [2024-12-10 04:13:24.858630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.720 [2024-12-10 04:13:24.858667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.720 [2024-12-10 04:13:24.858684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.720 [2024-12-10 04:13:24.863070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.720 [2024-12-10 04:13:24.863099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.720 [2024-12-10 04:13:24.863117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.720 [2024-12-10 04:13:24.868040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.720 [2024-12-10 04:13:24.868070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.720 [2024-12-10 04:13:24.868102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.720 [2024-12-10 04:13:24.873709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.720 [2024-12-10 04:13:24.873742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.720 [2024-12-10 04:13:24.873760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.720 [2024-12-10 04:13:24.880241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.720 [2024-12-10 04:13:24.880279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.720 [2024-12-10 04:13:24.880297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.720 [2024-12-10 04:13:24.885211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.720 [2024-12-10 04:13:24.885243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.720 [2024-12-10 04:13:24.885261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.720 [2024-12-10 04:13:24.889980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.720 [2024-12-10 04:13:24.890012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.720 [2024-12-10 04:13:24.890038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.720 [2024-12-10 04:13:24.895423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.720 [2024-12-10 04:13:24.895454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.720 [2024-12-10 04:13:24.895472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.720 [2024-12-10 04:13:24.901261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.720 [2024-12-10 04:13:24.901292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.720 [2024-12-10 04:13:24.901310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.720 [2024-12-10 04:13:24.905856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.720 [2024-12-10 04:13:24.905886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.720 [2024-12-10 04:13:24.905904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:24.910584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:24.910614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:24.910631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:24.915439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:24.915469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:24.915486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:24.921108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:24.921138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:24.921155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:24.928589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:24.928620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:24.928638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:24.934884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:24.934915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:24.934932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:24.941314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:24.941353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:24.941371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:24.947148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 
00:25:30.721 [2024-12-10 04:13:24.947179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:24.947197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:24.952750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:24.952781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:24.952799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:24.957989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:24.958020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:24.958038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:24.962552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:24.962582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:24.962599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:24.967170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:24.967205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:24.967224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:24.971734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:24.971764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:24.971781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:24.976379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:24.976409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:24.976427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:24.980922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:24.980951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:24.980968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:24.985626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:24.985654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:24.985671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:24.990314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:24.990343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:24.990360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:24.995778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:24.995809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:24.995827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:25.002460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:25.002493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:25.002511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:25.009856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:25.009912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:25.009940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:25.016224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:25.016257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:25.016276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:25.022240] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:25.022271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:25.022290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:25.028244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:25.028276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:25.028295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:25.034466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:25.034498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:25.034524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:25.041128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:25.041160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:25.041178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:25.047179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:25.047211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:25.047228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:25.052932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:25.052963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.721 [2024-12-10 04:13:25.052981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.721 [2024-12-10 04:13:25.058632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.721 [2024-12-10 04:13:25.058664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.722 [2024-12-10 04:13:25.058682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:25:30.722 [2024-12-10 04:13:25.064332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.722 [2024-12-10 04:13:25.064363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.722 [2024-12-10 04:13:25.064382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.722 [2024-12-10 04:13:25.070540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.722 [2024-12-10 04:13:25.070578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.722 [2024-12-10 04:13:25.070596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.722 [2024-12-10 04:13:25.076852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.722 [2024-12-10 04:13:25.076884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.722 [2024-12-10 04:13:25.076902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.722 [2024-12-10 04:13:25.082984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.722 [2024-12-10 04:13:25.083016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.722 [2024-12-10 04:13:25.083034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.722 [2024-12-10 04:13:25.089081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.722 [2024-12-10 04:13:25.089112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.722 [2024-12-10 04:13:25.089130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.722 [2024-12-10 04:13:25.094886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.722 [2024-12-10 04:13:25.094917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.722 [2024-12-10 04:13:25.094935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.722 [2024-12-10 04:13:25.100428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.722 [2024-12-10 04:13:25.100459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.722 [2024-12-10 04:13:25.100477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.106303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.106335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.981 [2024-12-10 04:13:25.106353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.113283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.113315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.981 [2024-12-10 04:13:25.113332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.118617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.118648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.981 [2024-12-10 04:13:25.118666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.123615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.123646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.981 [2024-12-10 04:13:25.123665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.129083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.129114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.981 [2024-12-10 04:13:25.129132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.135296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.135328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.981 [2024-12-10 04:13:25.135360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.141399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.141431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.981 [2024-12-10 04:13:25.141449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.146536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.146576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.981 [2024-12-10 04:13:25.146593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.151729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.151761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.981 [2024-12-10 04:13:25.151778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.157465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.157496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.981 [2024-12-10 04:13:25.157515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.163399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.163430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.981 [2024-12-10 04:13:25.163448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.168660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.168692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.981 [2024-12-10 04:13:25.168710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.174292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.174323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.981 [2024-12-10 04:13:25.174340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.180217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.180249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:30.981 [2024-12-10 04:13:25.180267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.185625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.185665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.981 [2024-12-10 04:13:25.185684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.191285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.191317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.981 [2024-12-10 04:13:25.191335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.197655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.197686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.981 [2024-12-10 04:13:25.197704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.202430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.202461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.981 [2024-12-10 04:13:25.202479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.206361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.981 [2024-12-10 04:13:25.206406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.981 [2024-12-10 04:13:25.206423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.981 [2024-12-10 04:13:25.211051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.211081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.211112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.215659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.215703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6528 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.215720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.220218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.220248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.220280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.225486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.225515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.225555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.232004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.232048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.232066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.239377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.239423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.239441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.244730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.244761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.244779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.250230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.250259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.250291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.255385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.255416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.255449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.261443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.261499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.261527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.265999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.266032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.266050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.270536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.270576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.270594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.275192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.275236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.275261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.279916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.279963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.279980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.284512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.284576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.284609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.289269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 
[2024-12-10 04:13:25.289299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.289331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.293886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.293932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.293950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.299357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.299389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.299407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.304106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.304151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.304169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.308726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.308768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.308786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.313439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.313484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.313502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.317997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.318034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.318052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.323390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.323420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.323437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.330225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.330272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.330289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.337699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.337732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.337750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.344710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.344742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.344760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.350193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.982 [2024-12-10 04:13:25.350225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.982 [2024-12-10 04:13:25.350243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:30.982 [2024-12-10 04:13:25.356296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:30.983 [2024-12-10 04:13:25.356328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.983 [2024-12-10 04:13:25.356347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.243 [2024-12-10 04:13:25.363402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.243 [2024-12-10 04:13:25.363435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.243 [2024-12-10 04:13:25.363454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.243 [2024-12-10 04:13:25.370235] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.243 [2024-12-10 04:13:25.370268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.243 [2024-12-10 04:13:25.370295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.243 [2024-12-10 04:13:25.377695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.243 [2024-12-10 04:13:25.377728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.243 [2024-12-10 04:13:25.377746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.243 [2024-12-10 04:13:25.385297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.243 [2024-12-10 04:13:25.385329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.243 [2024-12-10 04:13:25.385347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.243 [2024-12-10 04:13:25.392893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.243 [2024-12-10 04:13:25.392926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.243 [2024-12-10 04:13:25.392944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.243 [2024-12-10 04:13:25.400741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.243 [2024-12-10 04:13:25.400773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.243 [2024-12-10 04:13:25.400800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.243 [2024-12-10 04:13:25.409109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.243 [2024-12-10 04:13:25.409141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.243 [2024-12-10 04:13:25.409160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.243 [2024-12-10 04:13:25.416092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.243 [2024-12-10 04:13:25.416124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.243 [2024-12-10 04:13:25.416143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:25:31.243 [2024-12-10 04:13:25.424115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.243 [2024-12-10 04:13:25.424147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.243 [2024-12-10 04:13:25.424165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.243 [2024-12-10 04:13:25.431071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.243 [2024-12-10 04:13:25.431103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.243 [2024-12-10 04:13:25.431122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.243 [2024-12-10 04:13:25.438823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.243 [2024-12-10 04:13:25.438862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.243 [2024-12-10 04:13:25.438881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.243 [2024-12-10 04:13:25.446621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.243 [2024-12-10 04:13:25.446652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.243 [2024-12-10 04:13:25.446670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.243 [2024-12-10 04:13:25.453617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.243 [2024-12-10 04:13:25.453649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.453666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.460018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.460050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.460067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.465726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.465757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.465775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.471824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.471855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.471873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.477293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.477325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.477343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.483426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.483457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.483475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.489368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.489398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.489416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.494624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.494656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.494674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.499923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.499954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.499971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.505484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.505515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.505533] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.512001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.512035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.512053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.517516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.517554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.517575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.522540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.522579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.522597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.528103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.528135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.528153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.534150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.534181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.534200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.541017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.541049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.541076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.546426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.546457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.546475] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.551282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.551314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.551332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.556461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.556492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.556509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.561858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.561889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.561906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.567670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.567702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.567719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.571584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.571616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.571634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.577338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.577369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.577388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.584658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.584691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:31.244 [2024-12-10 04:13:25.584709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.592255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.592294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.592327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.599982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.600014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.600046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.607789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.607836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.607854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.615861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.244 [2024-12-10 04:13:25.615892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.244 [2024-12-10 04:13:25.615910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.244 [2024-12-10 04:13:25.624016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.245 [2024-12-10 04:13:25.624047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.245 [2024-12-10 04:13:25.624081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.504 [2024-12-10 04:13:25.631723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.504 [2024-12-10 04:13:25.631756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.504 [2024-12-10 04:13:25.631774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.504 [2024-12-10 04:13:25.639325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.504 [2024-12-10 04:13:25.639355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.504 [2024-12-10 04:13:25.639387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.504 [2024-12-10 04:13:25.646896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.504 [2024-12-10 04:13:25.646928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.504 [2024-12-10 04:13:25.646946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.504 [2024-12-10 04:13:25.654445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.504 [2024-12-10 04:13:25.654493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.504 [2024-12-10 04:13:25.654510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.504 [2024-12-10 04:13:25.662122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.504 [2024-12-10 04:13:25.662168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.504 [2024-12-10 04:13:25.662185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.504 [2024-12-10 04:13:25.669745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.504 [2024-12-10 04:13:25.669778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.504 [2024-12-10 04:13:25.669796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.504 [2024-12-10 04:13:25.677381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.504 [2024-12-10 04:13:25.677428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.504 [2024-12-10 04:13:25.677446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.504 [2024-12-10 04:13:25.685042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.504 [2024-12-10 04:13:25.685075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.504 [2024-12-10 04:13:25.685093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.504 [2024-12-10 04:13:25.692629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.504 [2024-12-10 04:13:25.692661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.504 [2024-12-10 04:13:25.692679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.504 [2024-12-10 04:13:25.700079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.504 [2024-12-10 04:13:25.700125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.504 [2024-12-10 04:13:25.700142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.504 [2024-12-10 04:13:25.707752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.504 [2024-12-10 04:13:25.707784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.504 [2024-12-10 04:13:25.707802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.504 [2024-12-10 04:13:25.715088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.504 [2024-12-10 04:13:25.715120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.504 [2024-12-10 04:13:25.715138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.505 [2024-12-10 04:13:25.723329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.505 [2024-12-10 04:13:25.723362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.505 [2024-12-10 04:13:25.723389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.505 [2024-12-10 04:13:25.728805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.505 [2024-12-10 04:13:25.728836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.505 [2024-12-10 04:13:25.728853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.505 [2024-12-10 04:13:25.734006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d69d0) 00:25:31.505 [2024-12-10 04:13:25.734037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.505 [2024-12-10 04:13:25.734055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.505 5538.00 IOPS, 692.25 MiB/s 00:25:31.505 Latency(us) 00:25:31.505 [2024-12-10T03:13:25.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:25:31.505 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:31.505 nvme0n1 : 2.00 5537.00 692.12 0.00 0.00 2885.30 655.36 12281.93 00:25:31.505 [2024-12-10T03:13:25.894Z] =================================================================================================================== 00:25:31.505 [2024-12-10T03:13:25.894Z] Total : 5537.00 692.12 0.00 0.00 2885.30 655.36 12281.93 00:25:31.505 { 00:25:31.505 "results": [ 00:25:31.505 { 00:25:31.505 "job": "nvme0n1", 00:25:31.505 "core_mask": "0x2", 00:25:31.505 "workload": "randread", 00:25:31.505 "status": "finished", 00:25:31.505 "queue_depth": 16, 00:25:31.505 "io_size": 131072, 00:25:31.505 "runtime": 2.003252, 00:25:31.505 "iops": 5536.996843133065, 00:25:31.505 "mibps": 692.1246053916332, 00:25:31.505 "io_failed": 0, 00:25:31.505 "io_timeout": 0, 00:25:31.505 "avg_latency_us": 2885.3044666159126, 00:25:31.505 "min_latency_us": 655.36, 00:25:31.505 "max_latency_us": 12281.931851851852 00:25:31.505 } 00:25:31.505 ], 00:25:31.505 "core_count": 1 00:25:31.505 } 00:25:31.505 04:13:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:31.505 04:13:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:31.505 04:13:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:31.505 04:13:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:31.505 | .driver_specific 00:25:31.505 | .nvme_error 00:25:31.505 | .status_code 00:25:31.505 | .command_transient_transport_error' 00:25:31.764 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 358 > 0 )) 00:25:31.764 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2498475 00:25:31.764 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2498475 ']' 00:25:31.764 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2498475 00:25:31.764 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:31.764 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:31.764 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2498475 00:25:31.764 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:31.764 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:31.764 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2498475' 00:25:31.764 killing process with pid 2498475 00:25:31.764 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2498475 00:25:31.764 Received shutdown signal, test time was about 2.000000 seconds 00:25:31.764 00:25:31.764 Latency(us) 00:25:31.764 [2024-12-10T03:13:26.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.764 [2024-12-10T03:13:26.153Z] 
=================================================================================================================== 00:25:31.764 [2024-12-10T03:13:26.153Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:31.764 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2498475 00:25:32.022 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:32.022 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:32.022 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:32.022 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:32.022 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:32.022 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2498886 00:25:32.022 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:32.022 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2498886 /var/tmp/bperf.sock 00:25:32.022 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2498886 ']' 00:25:32.022 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:32.022 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.022 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:32.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:32.022 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.022 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:32.022 [2024-12-10 04:13:26.345398] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:25:32.022 [2024-12-10 04:13:26.345474] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498886 ] 00:25:32.281 [2024-12-10 04:13:26.415168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.281 [2024-12-10 04:13:26.472249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.281 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:32.281 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:32.281 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:32.281 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:32.539 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:32.539 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.539 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:32.539 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.539 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:32.539 04:13:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:33.104 nvme0n1 00:25:33.104 04:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:33.104 04:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.104 04:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:33.104 04:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.104 04:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:33.104 04:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:33.104 Running I/O for 2 seconds... 
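For reference, the randwrite error pass that starts here is driven by a short RPC sequence. The sketch below replays that sequence outside the harness; the SPDK_DIR/BPERF_SOCK variables, the plain sleep standing in for waitforlisten, the assumption that the nvmf target answers on its default RPC socket, and the final kill are simplifications of mine, while the flags, addresses, and the jq filter are copied from the trace itself.

#!/usr/bin/env bash
# Minimal sketch of the bdevperf randwrite digest-error pass (assumptions noted above).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

rpc_cmd()   { "$SPDK_DIR/scripts/rpc.py" "$@"; }                  # target-side RPC (assumed default socket)
bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" "$@"; } # bdevperf-side RPC

# Start bdevperf as in the trace: core mask 0x2, randwrite, 4 KiB I/O, queue depth 128,
# 2-second run, and -z so the workload only starts when triggered over RPC.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!
sleep 1   # the harness uses waitforlisten on $BPERF_SOCK instead

# Record every NVMe status code and retry failed I/O indefinitely so the injected
# transient transport errors accumulate in the bdev iostat counters.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach with TCP data digest enabled, then switch crc32c error injection from
# 'disable' to 'corrupt' on the target (the -i 256 parameter is taken as-is from the trace).
rpc_cmd accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

# Run the 2-second workload, then pull the transient-transport-error counter the same
# way the harness did after the read pass and check that it is non-zero.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
errcount=$(bperf_rpc bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 )) && echo "counted $errcount transient transport errors"

kill "$bperfpid"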
00:25:33.104 [2024-12-10 04:13:27.448115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.104 [2024-12-10 04:13:27.448365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.104 [2024-12-10 04:13:27.448405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.104 [2024-12-10 04:13:27.461638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.104 [2024-12-10 04:13:27.461938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.104 [2024-12-10 04:13:27.461977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.105 [2024-12-10 04:13:27.475258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.105 [2024-12-10 04:13:27.475501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.105 [2024-12-10 04:13:27.475530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.363 [2024-12-10 04:13:27.489036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.363 [2024-12-10 04:13:27.489295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.363 [2024-12-10 04:13:27.489324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.363 [2024-12-10 04:13:27.502599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.363 [2024-12-10 04:13:27.502788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.363 [2024-12-10 04:13:27.502818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.363 [2024-12-10 04:13:27.515909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.363 [2024-12-10 04:13:27.516206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.363 [2024-12-10 04:13:27.516244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.363 [2024-12-10 04:13:27.529359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.363 [2024-12-10 04:13:27.529584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.363 [2024-12-10 04:13:27.529612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 
cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.363 [2024-12-10 04:13:27.542500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.363 [2024-12-10 04:13:27.542834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.363 [2024-12-10 04:13:27.542863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.363 [2024-12-10 04:13:27.555724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.363 [2024-12-10 04:13:27.555974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.363 [2024-12-10 04:13:27.556001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.364 [2024-12-10 04:13:27.568974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.364 [2024-12-10 04:13:27.569277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.364 [2024-12-10 04:13:27.569321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.364 [2024-12-10 04:13:27.582518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.364 [2024-12-10 04:13:27.582782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.364 [2024-12-10 04:13:27.582810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.364 [2024-12-10 04:13:27.595642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.364 [2024-12-10 04:13:27.595854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.364 [2024-12-10 04:13:27.595883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.364 [2024-12-10 04:13:27.608775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.364 [2024-12-10 04:13:27.608993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.364 [2024-12-10 04:13:27.609034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.364 [2024-12-10 04:13:27.622014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.364 [2024-12-10 04:13:27.622243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.364 [2024-12-10 04:13:27.622271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.364 [2024-12-10 04:13:27.635046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.364 [2024-12-10 04:13:27.635269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.364 [2024-12-10 04:13:27.635296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.364 [2024-12-10 04:13:27.648358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.364 [2024-12-10 04:13:27.648597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.364 [2024-12-10 04:13:27.648626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.364 [2024-12-10 04:13:27.661872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.364 [2024-12-10 04:13:27.662185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.364 [2024-12-10 04:13:27.662213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.364 [2024-12-10 04:13:27.675256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.364 [2024-12-10 04:13:27.675478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.364 [2024-12-10 04:13:27.675505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.364 [2024-12-10 04:13:27.688561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.364 [2024-12-10 04:13:27.688914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.364 [2024-12-10 04:13:27.688941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.364 [2024-12-10 04:13:27.701808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.364 [2024-12-10 04:13:27.702023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.364 [2024-12-10 04:13:27.702058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.364 [2024-12-10 04:13:27.715078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8 00:25:33.364 [2024-12-10 04:13:27.715344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.364 [2024-12-10 04:13:27.715375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:33.364 [2024-12-10 04:13:27.728434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc61e30) with pdu=0x200016eff3c8
00:25:33.364 [2024-12-10 04:13:27.728686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.364 [2024-12-10 04:13:27.728716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
[... the same three-record pattern (injected data digest error, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR completion) repeats for cid 102-105 with varying LBAs, roughly every 13 ms from 04:13:27.741 through 04:13:29.433, interleaved with the periodic throughput samples below ...]
00:25:34.168 19128.00 IOPS, 74.72 MiB/s [2024-12-10T03:13:28.557Z]
00:25:35.241 19246.00 IOPS, 75.18 MiB/s
00:25:35.241                                                                                                  Latency(us)
00:25:35.241 [2024-12-10T03:13:29.630Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:35.241 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:35.241 nvme0n1                     :       2.01   19246.89      75.18       0.00     0.00    6634.97    5072.97   14175.19
00:25:35.241 [2024-12-10T03:13:29.630Z] ===================================================================================================================
00:25:35.241 [2024-12-10T03:13:29.630Z] Total                       :              19246.89      75.18       0.00     0.00    6634.97    5072.97   14175.19
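The MiB/s column follows directly from the reported IOPS and the 4096-byte I/O size used by this job; a quick sanity check (an illustrative one-liner, not part of digest.sh or the test output):

# 19246.89 IOPS * 4096 bytes per I/O, converted to MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 19246.89 * 4096 / (1024 * 1024) }'
# prints 75.18 MiB/s, matching the bdevperf summary above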
"status": "finished", 00:25:35.241 "queue_depth": 128, 00:25:35.241 "io_size": 4096, 00:25:35.241 "runtime": 2.008221, 00:25:35.241 "iops": 19246.885676427046, 00:25:35.241 "mibps": 75.18314717354315, 00:25:35.241 "io_failed": 0, 00:25:35.241 "io_timeout": 0, 00:25:35.241 "avg_latency_us": 6634.9717755010515, 00:25:35.241 "min_latency_us": 5072.971851851852, 00:25:35.241 "max_latency_us": 14175.194074074074 00:25:35.241 } 00:25:35.241 ], 00:25:35.241 "core_count": 1 00:25:35.241 } 00:25:35.241 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:35.241 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:35.241 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:35.241 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:35.241 | .driver_specific 00:25:35.241 | .nvme_error 00:25:35.241 | .status_code 00:25:35.241 | .command_transient_transport_error' 00:25:35.500 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 151 > 0 )) 00:25:35.500 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2498886 00:25:35.500 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2498886 ']' 00:25:35.500 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2498886 00:25:35.500 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:35.500 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:35.500 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2498886 00:25:35.500 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:35.500 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:35.500 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2498886' 00:25:35.500 killing process with pid 2498886 00:25:35.500 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2498886 00:25:35.500 Received shutdown signal, test time was about 2.000000 seconds 00:25:35.500 00:25:35.500 Latency(us) 00:25:35.500 [2024-12-10T03:13:29.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.500 [2024-12-10T03:13:29.889Z] =================================================================================================================== 00:25:35.500 [2024-12-10T03:13:29.889Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:35.500 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2498886 00:25:35.759 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:35.759 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:35.759 04:13:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:35.759 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:35.759 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:35.759 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2499420 00:25:35.759 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:35.759 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2499420 /var/tmp/bperf.sock 00:25:35.759 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2499420 ']' 00:25:35.759 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:35.759 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:35.759 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:35.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:35.759 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:35.759 04:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:35.759 [2024-12-10 04:13:30.025539] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:25:35.759 [2024-12-10 04:13:30.025669] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2499420 ] 00:25:35.759 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:35.759 Zero copy mechanism will not be used. 
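For reference, the error-count check traced above (which evaluated (( 151 > 0 ))) is a single RPC call piped through jq. A minimal sketch of that helper, using the rpc.py path, socket and jq filter exactly as they appear in the trace (wrapping it as a standalone snippet is the only addition here):

    # sketch of the get_transient_errcount helper traced above (paths and filter taken from the log)
    BPERF_SOCK=/var/tmp/bperf.sock
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat exposes the driver_specific.nvme_error counters once
        # bdev_nvme_set_options --nvme-error-stat has been applied (see the trace below)
        "$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }

    # the script's check, as traced: (( $(get_transient_errcount nvme0n1) > 0 ))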
00:25:35.759 [2024-12-10 04:13:30.095088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.017 [2024-12-10 04:13:30.152619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.017 04:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:36.017 04:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:36.017 04:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:36.017 04:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:36.275 04:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:36.275 04:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.275 04:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:36.275 04:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.275 04:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:36.275 04:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:36.841 nvme0n1 00:25:36.841 04:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:36.841 04:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.841 04:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:36.841 04:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.841 04:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:36.841 04:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:36.841 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:36.841 Zero copy mechanism will not be used. 00:25:36.841 Running I/O for 2 seconds... 
00:25:36.841 [2024-12-10 04:13:31.050346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.841 [2024-12-10 04:13:31.050466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.841 [2024-12-10 04:13:31.050525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.056296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.056432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.056470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.061831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.062033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.062064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.068385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.068579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.068609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.074382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.074522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.074566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.081474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.081611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.081642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.087813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.087958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.087993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.093362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.093520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.093563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.099334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.099459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.099496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.104880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.105033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.105073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.110172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.110312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.110343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.116649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.116787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.116817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.122416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.122525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.122582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.129024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.129166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.129196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.135034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.135161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.135194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.140248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.140396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.140432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.145406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.145566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.145600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.150598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.150719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.150750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.155816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.155960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.155992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.161279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.161363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.161394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.167453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.167611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.167641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.174593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.174711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.174742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.181157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.181264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.181296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.187773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.187947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.187977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.195084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.195213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.195243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.201972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.202083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.202122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.207753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.207889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.207931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.213568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.213698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.842 [2024-12-10 04:13:31.213732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.842 [2024-12-10 04:13:31.220116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:36.842 [2024-12-10 04:13:31.220205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.843 [2024-12-10 04:13:31.220234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.103 [2024-12-10 04:13:31.226085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.103 [2024-12-10 04:13:31.226168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.103 [2024-12-10 04:13:31.226206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.103 [2024-12-10 04:13:31.231510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.103 [2024-12-10 04:13:31.231619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.103 [2024-12-10 04:13:31.231658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.103 [2024-12-10 04:13:31.236887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.103 [2024-12-10 04:13:31.236980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.103 [2024-12-10 04:13:31.237017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.103 [2024-12-10 04:13:31.242513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.103 [2024-12-10 04:13:31.242599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.103 [2024-12-10 04:13:31.242635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.103 [2024-12-10 04:13:31.248692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.103 [2024-12-10 04:13:31.248777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.103 [2024-12-10 04:13:31.248813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.103 [2024-12-10 04:13:31.254570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.103 [2024-12-10 04:13:31.254648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.103 [2024-12-10 04:13:31.254687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.103 [2024-12-10 04:13:31.260435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.103 [2024-12-10 04:13:31.260532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.103 [2024-12-10 04:13:31.260574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.103 [2024-12-10 04:13:31.266481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.103 [2024-12-10 04:13:31.266575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.266608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.272349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.272433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.272462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.277891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.277967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.278003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.284055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.284138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.284169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.289381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.289515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.289558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.295431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.295603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 
04:13:31.295634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.301872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.302011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.302043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.308708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.308817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.308859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.315831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.315932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.315973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.323007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.323082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.323111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.330019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.330149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.330181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.337166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.337245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.337274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.342647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.342959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:37.104 [2024-12-10 04:13:31.342995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.348025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.348341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.348372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.353400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.353701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.353732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.358097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.358369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.358406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.362553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.362792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.362834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.366950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.367219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.367253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.371444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.371684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.371723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.376041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.376283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.376313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.381599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.381825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.381856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.386301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.386507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.386538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.391167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.391392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.391428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.396029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.396255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.396290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.400956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.401186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.401225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.405680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.405916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.405947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.410623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.410837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.410868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.415348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.415657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.104 [2024-12-10 04:13:31.415688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.104 [2024-12-10 04:13:31.420220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.104 [2024-12-10 04:13:31.420486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.105 [2024-12-10 04:13:31.420522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.105 [2024-12-10 04:13:31.424892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.105 [2024-12-10 04:13:31.425125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.105 [2024-12-10 04:13:31.425160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.105 [2024-12-10 04:13:31.429871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.105 [2024-12-10 04:13:31.430129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.105 [2024-12-10 04:13:31.430160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.105 [2024-12-10 04:13:31.435432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.105 [2024-12-10 04:13:31.435717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.105 [2024-12-10 04:13:31.435747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.105 [2024-12-10 04:13:31.440133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.105 [2024-12-10 04:13:31.440375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.105 [2024-12-10 04:13:31.440413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.105 [2024-12-10 04:13:31.444636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.105 [2024-12-10 04:13:31.444835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.105 [2024-12-10 04:13:31.444866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.105 [2024-12-10 04:13:31.448996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.105 [2024-12-10 04:13:31.449269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.105 [2024-12-10 04:13:31.449308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.105 [2024-12-10 04:13:31.453388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.105 [2024-12-10 04:13:31.453650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.105 [2024-12-10 04:13:31.453680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.105 [2024-12-10 04:13:31.457886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.105 [2024-12-10 04:13:31.458122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.105 [2024-12-10 04:13:31.458153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.105 [2024-12-10 04:13:31.462369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.105 [2024-12-10 04:13:31.462646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.105 [2024-12-10 04:13:31.462678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.105 [2024-12-10 04:13:31.467174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.105 [2024-12-10 04:13:31.467404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.105 [2024-12-10 04:13:31.467439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.105 [2024-12-10 04:13:31.472108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.105 [2024-12-10 04:13:31.472352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.105 [2024-12-10 04:13:31.472385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.105 [2024-12-10 04:13:31.477293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.105 [2024-12-10 04:13:31.477498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.105 [2024-12-10 04:13:31.477532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.105 [2024-12-10 04:13:31.482494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.105 [2024-12-10 04:13:31.482741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.105 [2024-12-10 04:13:31.482782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.487746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.488003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.488048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.492743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.492965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.493005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.497816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.498049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.498088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.502975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.503210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.503244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.508015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.508237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.508276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.512989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.513240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.513271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.518041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.518316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.518347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.523279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.523518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.523567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.528281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.528508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.528553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.533233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.533502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.533543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.537678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.537916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.537955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.541896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.542104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.542140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.546172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 
04:13:31.546404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.546438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.550385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.550638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.550674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.554659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.554837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.554875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.559005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.559284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.559317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.563362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.563646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.563687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.567890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.568094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.568135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.572884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.573111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.573149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.577606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 
00:25:37.367 [2024-12-10 04:13:31.577776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.577807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.582688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.367 [2024-12-10 04:13:31.582885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.367 [2024-12-10 04:13:31.582924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.367 [2024-12-10 04:13:31.586968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.587150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.587189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.591337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.591555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.591602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.595616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.595803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.595837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.599915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.600143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.600183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.604341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.604592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.604633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.608699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with 
pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.608927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.608962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.613111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.613335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.613369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.617440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.617658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.617694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.621725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.621923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.621956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.626013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.626221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.626260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.630266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.630489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.630528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.634881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.635090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.635130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.639295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.639557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.639593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.643601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.643789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.643819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.648786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.648963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.649003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.653204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.653406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.653443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.657612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.657822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.657860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.661852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.662062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.662100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.666121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.666344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.666376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.670578] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.670777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.670809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.674907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.675133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.675170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.679120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.679322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.679359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.683377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.683584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.683621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.687573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.687788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.687823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.691736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.691963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.692001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.695979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.696210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.696246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.700228] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.700433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.700470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.704462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.368 [2024-12-10 04:13:31.704690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.368 [2024-12-10 04:13:31.704726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.368 [2024-12-10 04:13:31.708669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.369 [2024-12-10 04:13:31.708862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.369 [2024-12-10 04:13:31.708898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.369 [2024-12-10 04:13:31.712827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.369 [2024-12-10 04:13:31.713042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.369 [2024-12-10 04:13:31.713079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.369 [2024-12-10 04:13:31.717118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.369 [2024-12-10 04:13:31.717306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.369 [2024-12-10 04:13:31.717346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.369 [2024-12-10 04:13:31.721290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.369 [2024-12-10 04:13:31.721488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.369 [2024-12-10 04:13:31.721518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.369 [2024-12-10 04:13:31.725533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.369 [2024-12-10 04:13:31.725733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.369 [2024-12-10 04:13:31.725769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.369 
[2024-12-10 04:13:31.729769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.369 [2024-12-10 04:13:31.729984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.369 [2024-12-10 04:13:31.730017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.369 [2024-12-10 04:13:31.734029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.369 [2024-12-10 04:13:31.734260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.369 [2024-12-10 04:13:31.734298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.369 [2024-12-10 04:13:31.738260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.369 [2024-12-10 04:13:31.738479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.369 [2024-12-10 04:13:31.738519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.369 [2024-12-10 04:13:31.742632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.369 [2024-12-10 04:13:31.742839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.369 [2024-12-10 04:13:31.742871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.369 [2024-12-10 04:13:31.746932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.369 [2024-12-10 04:13:31.747153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.369 [2024-12-10 04:13:31.747190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.629 [2024-12-10 04:13:31.751284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.629 [2024-12-10 04:13:31.751491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.629 [2024-12-10 04:13:31.751527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.629 [2024-12-10 04:13:31.755669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.629 [2024-12-10 04:13:31.755868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.629 [2024-12-10 04:13:31.755906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:25:37.629 [2024-12-10 04:13:31.759886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.629 [2024-12-10 04:13:31.760085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.629 [2024-12-10 04:13:31.760127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.629 [2024-12-10 04:13:31.764150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.629 [2024-12-10 04:13:31.764336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.629 [2024-12-10 04:13:31.764371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.629 [2024-12-10 04:13:31.768455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.629 [2024-12-10 04:13:31.768687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.629 [2024-12-10 04:13:31.768724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.629 [2024-12-10 04:13:31.772754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.629 [2024-12-10 04:13:31.772942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.629 [2024-12-10 04:13:31.772976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.629 [2024-12-10 04:13:31.777044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.629 [2024-12-10 04:13:31.777241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.629 [2024-12-10 04:13:31.777279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.629 [2024-12-10 04:13:31.781271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.629 [2024-12-10 04:13:31.781493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.629 [2024-12-10 04:13:31.781525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.629 [2024-12-10 04:13:31.785538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.629 [2024-12-10 04:13:31.785809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.629 [2024-12-10 04:13:31.785844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:25:37.629 [2024-12-10 04:13:31.789835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.629 [2024-12-10 04:13:31.790043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.629 [2024-12-10 04:13:31.790077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.629 [2024-12-10 04:13:31.794088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.629 [2024-12-10 04:13:31.794285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.629 [2024-12-10 04:13:31.794324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.629 [2024-12-10 04:13:31.798348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.629 [2024-12-10 04:13:31.798555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.629 [2024-12-10 04:13:31.798593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.629 [2024-12-10 04:13:31.802555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.629 [2024-12-10 04:13:31.802780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.629 [2024-12-10 04:13:31.802812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.629 [2024-12-10 04:13:31.806800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.807000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.807038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.810997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.811248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.811286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.815381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.815681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.815721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.819763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.820000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.820040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.824640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.824801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.824841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.829553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.829735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.829767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.834702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.834858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.834898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.840360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.840589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.840620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.845412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.845595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.845631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.850358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.850570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.850602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.855736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.855913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.855945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.861328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.861487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.861526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.866964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.867141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.867175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.871973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.872159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.872191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.877276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.877467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.877496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.882504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.882698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.882740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.887602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.887786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.887828] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.892641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.892806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.892841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.897985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.898165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.898205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.902992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.903208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.903239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.908054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.908243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.908273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.913415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.913629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.913679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.918499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.918680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.918710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.923952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.924161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.924213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.929071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.929285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.929321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.934045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.934232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.934270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.939222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.939408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.939443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.630 [2024-12-10 04:13:31.944209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.630 [2024-12-10 04:13:31.944416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.630 [2024-12-10 04:13:31.944451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.631 [2024-12-10 04:13:31.949353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.631 [2024-12-10 04:13:31.949542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.631 [2024-12-10 04:13:31.949594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.631 [2024-12-10 04:13:31.954382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.631 [2024-12-10 04:13:31.954579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.631 [2024-12-10 04:13:31.954617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.631 [2024-12-10 04:13:31.959383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.631 [2024-12-10 04:13:31.959591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.631 [2024-12-10 
04:13:31.959641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.631 [2024-12-10 04:13:31.964493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.631 [2024-12-10 04:13:31.964679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.631 [2024-12-10 04:13:31.964711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.631 [2024-12-10 04:13:31.969484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.631 [2024-12-10 04:13:31.969678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.631 [2024-12-10 04:13:31.969720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.631 [2024-12-10 04:13:31.974631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.631 [2024-12-10 04:13:31.974811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.631 [2024-12-10 04:13:31.974851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.631 [2024-12-10 04:13:31.979729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.631 [2024-12-10 04:13:31.979898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.631 [2024-12-10 04:13:31.979931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.631 [2024-12-10 04:13:31.984718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.631 [2024-12-10 04:13:31.984909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.631 [2024-12-10 04:13:31.984948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.631 [2024-12-10 04:13:31.989653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.631 [2024-12-10 04:13:31.989824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.631 [2024-12-10 04:13:31.989858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.631 [2024-12-10 04:13:31.994861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.631 [2024-12-10 04:13:31.995116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:37.631 [2024-12-10 04:13:31.995153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.631 [2024-12-10 04:13:32.000324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.631 [2024-12-10 04:13:32.000506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.631 [2024-12-10 04:13:32.000536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.631 [2024-12-10 04:13:32.005451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.631 [2024-12-10 04:13:32.005677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.631 [2024-12-10 04:13:32.005717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.631 [2024-12-10 04:13:32.010513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.631 [2024-12-10 04:13:32.010652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.893 [2024-12-10 04:13:32.010692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.893 [2024-12-10 04:13:32.015619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.893 [2024-12-10 04:13:32.015762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.893 [2024-12-10 04:13:32.015799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.893 [2024-12-10 04:13:32.020865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.893 [2024-12-10 04:13:32.021050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.893 [2024-12-10 04:13:32.021084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.893 [2024-12-10 04:13:32.025957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.893 [2024-12-10 04:13:32.026145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.893 [2024-12-10 04:13:32.026175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.893 [2024-12-10 04:13:32.031125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.893 [2024-12-10 04:13:32.031330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:37.893 [2024-12-10 04:13:32.031361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.893 [2024-12-10 04:13:32.036315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.893 [2024-12-10 04:13:32.036510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.893 [2024-12-10 04:13:32.036556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.893 [2024-12-10 04:13:32.041412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.893 [2024-12-10 04:13:32.041593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.893 [2024-12-10 04:13:32.041624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.893 [2024-12-10 04:13:32.046418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.893 [2024-12-10 04:13:32.046596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.893 [2024-12-10 04:13:32.046632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.893 [2024-12-10 04:13:32.051453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.893 [2024-12-10 04:13:32.053062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.893 [2024-12-10 04:13:32.053095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.893 6137.00 IOPS, 767.12 MiB/s [2024-12-10T03:13:32.282Z] [2024-12-10 04:13:32.057640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.893 [2024-12-10 04:13:32.057728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.893 [2024-12-10 04:13:32.057762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.893 [2024-12-10 04:13:32.061950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.893 [2024-12-10 04:13:32.062058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.893 [2024-12-10 04:13:32.062098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.893 [2024-12-10 04:13:32.066252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.893 [2024-12-10 04:13:32.066347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.893 [2024-12-10 04:13:32.066387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.893 [2024-12-10 04:13:32.070558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.893 [2024-12-10 04:13:32.070662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.893 [2024-12-10 04:13:32.070700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.893 [2024-12-10 04:13:32.074856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.893 [2024-12-10 04:13:32.074964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.893 [2024-12-10 04:13:32.074999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.893 [2024-12-10 04:13:32.079254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.893 [2024-12-10 04:13:32.079344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.893 [2024-12-10 04:13:32.079382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.083622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.083707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.083748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.087887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.087990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.088026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.092311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.092418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.092455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.096646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 
04:13:32.096727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.096765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.100972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.101061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.101091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.105524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.105634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.105664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.110572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.110657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.110697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.114905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.115004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.115043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.119451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.119562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.119601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.123865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.123964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.123998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.128350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 
00:25:37.894 [2024-12-10 04:13:32.128454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.128486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.132713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.132813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.132846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.137096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.137184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.137225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.141501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.141629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.141663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.145835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.145965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.146000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.150185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.150297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.150335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.154661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.154786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.154823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.159510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) 
with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.159669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.159715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.164963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.165164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.165194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.169459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.169562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.169594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.173892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.894 [2024-12-10 04:13:32.174035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.894 [2024-12-10 04:13:32.174073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.894 [2024-12-10 04:13:32.178579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.178727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.178766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.183025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.183130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.183164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.187895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.187987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.188016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.193314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.193434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.193469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.197694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.197771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.197811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.202042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.202133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.202170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.206387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.206474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.206514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.210704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.210795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.210828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.215172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.215269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.215304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.219361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.219440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.219476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.223674] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.223762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.223798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.227957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.228035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.228074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.232109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.232203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.232239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.236332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.236426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.236464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.240557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.240642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.240679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.244853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.244962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.244998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.249242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.249353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.249393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.253457] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.253534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.253587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.257758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.257844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.257883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.262112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.262200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.262233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.266442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.266531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.895 [2024-12-10 04:13:32.266579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:37.895 [2024-12-10 04:13:32.270901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:37.895 [2024-12-10 04:13:32.270989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.896 [2024-12-10 04:13:32.271025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.156 [2024-12-10 04:13:32.275354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.156 [2024-12-10 04:13:32.275461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.156 [2024-12-10 04:13:32.275497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.156 [2024-12-10 04:13:32.279835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.156 [2024-12-10 04:13:32.279955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.156 [2024-12-10 04:13:32.279992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.156 
[2024-12-10 04:13:32.284262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.156 [2024-12-10 04:13:32.284353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.156 [2024-12-10 04:13:32.284387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.156 [2024-12-10 04:13:32.288662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.156 [2024-12-10 04:13:32.288749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.156 [2024-12-10 04:13:32.288786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.156 [2024-12-10 04:13:32.292876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.156 [2024-12-10 04:13:32.292973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.156 [2024-12-10 04:13:32.293008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.156 [2024-12-10 04:13:32.297123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.156 [2024-12-10 04:13:32.297204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.156 [2024-12-10 04:13:32.297240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.156 [2024-12-10 04:13:32.301299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.156 [2024-12-10 04:13:32.301386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.156 [2024-12-10 04:13:32.301426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.156 [2024-12-10 04:13:32.305773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.156 [2024-12-10 04:13:32.305902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.156 [2024-12-10 04:13:32.305938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.156 [2024-12-10 04:13:32.310814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.156 [2024-12-10 04:13:32.310958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.156 [2024-12-10 04:13:32.310989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:25:38.156 [2024-12-10 04:13:32.316422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.156 [2024-12-10 04:13:32.316631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.156 [2024-12-10 04:13:32.316661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.156 [2024-12-10 04:13:32.322566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.156 [2024-12-10 04:13:32.322732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.156 [2024-12-10 04:13:32.322762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.156 [2024-12-10 04:13:32.327278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.156 [2024-12-10 04:13:32.327427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.156 [2024-12-10 04:13:32.327457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.156 [2024-12-10 04:13:32.331689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.156 [2024-12-10 04:13:32.331867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.156 [2024-12-10 04:13:32.331903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.156 [2024-12-10 04:13:32.336510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.156 [2024-12-10 04:13:32.336631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.156 [2024-12-10 04:13:32.336662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.156 [2024-12-10 04:13:32.341086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.156 [2024-12-10 04:13:32.341235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.156 [2024-12-10 04:13:32.341265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.156 [2024-12-10 04:13:32.345949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.156 [2024-12-10 04:13:32.346130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.156 [2024-12-10 04:13:32.346161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.156 [2024-12-10 04:13:32.351256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.156 [2024-12-10 04:13:32.351455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.156 [2024-12-10 04:13:32.351485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.156 [2024-12-10 04:13:32.356961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.156 [2024-12-10 04:13:32.357182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.357213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.362893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.363049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.363085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.369064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.369297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.369327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.374452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.374628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.374658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.379720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.379844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.379881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.384894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.385068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.385096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.390535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.390754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.390784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.395795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.395977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.396007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.401122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.401281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.401312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.406718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.406900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.406929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.412170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.412360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.412405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.417650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.417806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.417836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.423106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.423264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.423295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.428514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.428744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.428783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.434117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.434317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.434347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.439643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.439791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.439836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.445077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.445247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.445277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.450389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.450583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.450614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.455491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.455658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.455692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.460625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.460781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.460811] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.465731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.465868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.465898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.470830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.470956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.470986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.476385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.476535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.476573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.481676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.481882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.481912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.487120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.487335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.487365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.492656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.492819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.492849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.497995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.498137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 
04:13:32.498182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.503437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.157 [2024-12-10 04:13:32.503586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.157 [2024-12-10 04:13:32.503617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.157 [2024-12-10 04:13:32.509038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.158 [2024-12-10 04:13:32.509194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.158 [2024-12-10 04:13:32.509224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.158 [2024-12-10 04:13:32.514208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.158 [2024-12-10 04:13:32.514397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.158 [2024-12-10 04:13:32.514427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.158 [2024-12-10 04:13:32.518759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.158 [2024-12-10 04:13:32.518858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.158 [2024-12-10 04:13:32.518896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.158 [2024-12-10 04:13:32.523503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.158 [2024-12-10 04:13:32.523697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.158 [2024-12-10 04:13:32.523728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.158 [2024-12-10 04:13:32.528633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.158 [2024-12-10 04:13:32.528793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.158 [2024-12-10 04:13:32.528823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.158 [2024-12-10 04:13:32.533679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.158 [2024-12-10 04:13:32.533860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:38.158 [2024-12-10 04:13:32.533890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.538799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.538941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.538971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.543843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.544027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.544056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.548812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.548983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.549013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.553881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.554053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.554082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.558969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.559156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.559185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.564044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.564225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.564262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.569001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.569187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.569217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.574101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.574259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.574289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.579255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.579450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.579480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.584460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.584680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.584711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.590966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.591175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.591204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.595948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.596095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.596128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.600533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.600683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.600718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.605165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.605273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.605305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.609843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.609961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.609997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.614611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.614722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.614758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.618988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.619104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.619142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.623327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.623437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.623475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.628252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.628342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.628370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.632991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.633113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.419 [2024-12-10 04:13:32.633145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.419 [2024-12-10 04:13:32.637390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8 00:25:38.419 [2024-12-10 04:13:32.637511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:38.420 [2024-12-10 04:13:32.637553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:38.420 [2024-12-10 04:13:32.641795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc62170) with pdu=0x200016eff3c8
00:25:38.420 [2024-12-10 04:13:32.641893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:38.420 [2024-12-10 04:13:32.641929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:38.420 [... the same data_crc32_calc_done / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats for dozens more WRITE commands between 04:13:32.646 and 04:13:33.053, all on tqpair=(0xc62170) with pdu=0x200016eff3c8; lba and sqhd vary from entry to entry and cid switches from 1 to 0 partway through ...]
00:25:38.684 6216.50 IOPS, 777.06 MiB/s
00:25:38.684 Latency(us)
00:25:38.684 [2024-12-10T03:13:33.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:38.684 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:25:38.684 nvme0n1 : 2.00 6212.44 776.55 0.00 0.00 2567.22 1868.99 7670.14
00:25:38.684 [2024-12-10T03:13:33.073Z]
=================================================================================================================== 00:25:38.684 [2024-12-10T03:13:33.073Z] Total : 6212.44 776.55 0.00 0.00 2567.22 1868.99 7670.14 00:25:38.684 { 00:25:38.684 "results": [ 00:25:38.684 { 00:25:38.684 "job": "nvme0n1", 00:25:38.684 "core_mask": "0x2", 00:25:38.684 "workload": "randwrite", 00:25:38.684 "status": "finished", 00:25:38.684 "queue_depth": 16, 00:25:38.684 "io_size": 131072, 00:25:38.684 "runtime": 2.003722, 00:25:38.684 "iops": 6212.438651669244, 00:25:38.684 "mibps": 776.5548314586555, 00:25:38.684 "io_failed": 0, 00:25:38.684 "io_timeout": 0, 00:25:38.684 "avg_latency_us": 2567.2166504808147, 00:25:38.684 "min_latency_us": 1868.9896296296297, 00:25:38.684 "max_latency_us": 7670.139259259259 00:25:38.684 } 00:25:38.684 ], 00:25:38.684 "core_count": 1 00:25:38.684 } 00:25:38.943 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:38.943 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:38.943 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:38.943 | .driver_specific 00:25:38.943 | .nvme_error 00:25:38.943 | .status_code 00:25:38.943 | .command_transient_transport_error' 00:25:38.943 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:39.203 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 402 > 0 )) 00:25:39.203 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2499420 00:25:39.203 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2499420 ']' 00:25:39.203 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2499420 00:25:39.203 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:39.203 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:39.203 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2499420 00:25:39.203 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:39.203 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:39.203 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2499420' 00:25:39.203 killing process with pid 2499420 00:25:39.203 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2499420 00:25:39.203 Received shutdown signal, test time was about 2.000000 seconds 00:25:39.203 00:25:39.203 Latency(us) 00:25:39.203 [2024-12-10T03:13:33.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.203 [2024-12-10T03:13:33.592Z] =================================================================================================================== 00:25:39.203 [2024-12-10T03:13:33.592Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:39.203 04:13:33 
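The 776.55 MiB/s in the job row is just the IOPS figure at the fixed 128 KiB IO size (6212.44 x 131072 bytes is roughly 776.6 MiB/s); what actually decides the case is the counter read back just above, where digest.sh fetches bdevperf's per-bdev iostat over the bperf RPC socket and pulls out the transient-transport-error count (402 here) before shutting bdevperf down. A condensed reconstruction of that check, with the socket path, bdev name and jq filter taken from the trace (a sketch, not the script verbatim):

# Reconstruction of the get_transient_errcount step traced above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
count=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')
# 402 > 0 in this run, so the error-injection case passes and the bperf
# process is then killed off by killprocess.
(( count > 0 ))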
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2499420 00:25:39.463 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2498042 00:25:39.463 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2498042 ']' 00:25:39.463 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2498042 00:25:39.463 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:39.463 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:39.463 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2498042 00:25:39.463 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:39.463 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:39.463 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2498042' 00:25:39.463 killing process with pid 2498042 00:25:39.463 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2498042 00:25:39.463 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2498042 00:25:39.723 00:25:39.723 real 0m15.383s 00:25:39.723 user 0m30.801s 00:25:39.723 sys 0m4.354s 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:39.723 ************************************ 00:25:39.723 END TEST nvmf_digest_error 00:25:39.723 ************************************ 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:39.723 rmmod nvme_tcp 00:25:39.723 rmmod nvme_fabrics 00:25:39.723 rmmod nvme_keyring 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2498042 ']' 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2498042 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2498042 ']' 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@958 -- # kill -0 2498042 00:25:39.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2498042) - No such process 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2498042 is not found' 00:25:39.723 Process with pid 2498042 is not found 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.723 04:13:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:42.257 00:25:42.257 real 0m35.537s 00:25:42.257 user 1m2.888s 00:25:42.257 sys 0m10.227s 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:42.257 ************************************ 00:25:42.257 END TEST nvmf_digest 00:25:42.257 ************************************ 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.257 ************************************ 00:25:42.257 START TEST nvmf_bdevperf 00:25:42.257 ************************************ 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:42.257 * Looking for test storage... 
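Before the next suite starts, nvmftestfini has already torn the host back down: the nvmf app (pid 2498042) was gone by the time killprocess re-checked it, the kernel NVMe/TCP modules were unloaded, and the firewall was restored without the SPDK-tagged rules. Condensed from the trace above, with module names and the SPDK_NVMF tag as logged (a sketch, not the common.sh code):

# Teardown steps reconstructed from the nvmftestfini trace above.
sync
modprobe -v -r nvme-tcp       # drops nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
# Re-load the saved ruleset minus anything SPDK added (tagged SPDK_NVMF).
iptables-save | grep -v SPDK_NVMF | iptables-restore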
00:25:42.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:42.257 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:42.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.258 --rc genhtml_branch_coverage=1 00:25:42.258 --rc genhtml_function_coverage=1 00:25:42.258 --rc genhtml_legend=1 00:25:42.258 --rc geninfo_all_blocks=1 00:25:42.258 --rc geninfo_unexecuted_blocks=1 00:25:42.258 00:25:42.258 ' 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:42.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.258 --rc genhtml_branch_coverage=1 00:25:42.258 --rc genhtml_function_coverage=1 00:25:42.258 --rc genhtml_legend=1 00:25:42.258 --rc geninfo_all_blocks=1 00:25:42.258 --rc geninfo_unexecuted_blocks=1 00:25:42.258 00:25:42.258 ' 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:42.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.258 --rc genhtml_branch_coverage=1 00:25:42.258 --rc genhtml_function_coverage=1 00:25:42.258 --rc genhtml_legend=1 00:25:42.258 --rc geninfo_all_blocks=1 00:25:42.258 --rc geninfo_unexecuted_blocks=1 00:25:42.258 00:25:42.258 ' 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:42.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.258 --rc genhtml_branch_coverage=1 00:25:42.258 --rc genhtml_function_coverage=1 00:25:42.258 --rc genhtml_legend=1 00:25:42.258 --rc geninfo_all_blocks=1 00:25:42.258 --rc geninfo_unexecuted_blocks=1 00:25:42.258 00:25:42.258 ' 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:42.258 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.258 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:42.259 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:42.259 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:42.259 04:13:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:44.161 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:44.161 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:44.161 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:44.161 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:44.161 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:44.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:44.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:25:44.162 00:25:44.162 --- 10.0.0.2 ping statistics --- 00:25:44.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.162 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:44.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:44.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:25:44.162 00:25:44.162 --- 10.0.0.1 ping statistics --- 00:25:44.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.162 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2501792 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2501792 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2501792 ']' 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:44.162 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:44.162 [2024-12-10 04:13:38.489373] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:25:44.162 [2024-12-10 04:13:38.489464] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.420 [2024-12-10 04:13:38.561887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:44.420 [2024-12-10 04:13:38.619202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:44.420 [2024-12-10 04:13:38.619254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:44.420 [2024-12-10 04:13:38.619282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:44.420 [2024-12-10 04:13:38.619293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:44.420 [2024-12-10 04:13:38.619302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:44.420 [2024-12-10 04:13:38.620880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:44.420 [2024-12-10 04:13:38.620924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:44.420 [2024-12-10 04:13:38.620927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.420 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:44.420 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:44.420 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:44.420 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:44.420 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:44.420 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:44.420 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:44.420 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.420 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:44.420 [2024-12-10 04:13:38.756099] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:44.420 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.420 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:44.420 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.420 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:44.420 Malloc0 00:25:44.420 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.420 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:44.421 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.421 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:44.421 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
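Pulled together for readability, the target-side setup driven through rpc_cmd in the surrounding entries amounts to five RPCs: a TCP transport, a malloc bdev, a subsystem, its namespace, and a listener on 10.0.0.2:4420. Assuming rpc_cmd forwards to scripts/rpc.py over the /var/tmp/spdk.sock socket waited on above (a sketch with argument values copied from the xtrace, not captured output):

# Hedged sketch of the target configuration applied in this stretch of the log.
rpcpy=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
rpc() { "$rpcpy" -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t tcp -o -u 8192                          # TCP transport, 8192-byte IO unit
rpc bdev_malloc_create 64 512 -b Malloc0                             # 64 MB malloc bdev, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0         # expose Malloc0 as a namespace
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420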
00:25:44.421 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:44.421 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.421 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:44.681 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.681 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:44.681 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.681 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:44.681 [2024-12-10 04:13:38.813291] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:44.681 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.681 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:44.681 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:44.681 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:44.681 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:44.681 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:44.681 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:44.681 { 00:25:44.681 "params": { 00:25:44.681 "name": "Nvme$subsystem", 00:25:44.681 "trtype": "$TEST_TRANSPORT", 00:25:44.681 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.681 "adrfam": "ipv4", 00:25:44.681 "trsvcid": "$NVMF_PORT", 00:25:44.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.681 "hdgst": ${hdgst:-false}, 00:25:44.681 "ddgst": ${ddgst:-false} 00:25:44.681 }, 00:25:44.681 "method": "bdev_nvme_attach_controller" 00:25:44.681 } 00:25:44.681 EOF 00:25:44.681 )") 00:25:44.681 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:44.681 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:25:44.681 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:44.681 04:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:44.681 "params": { 00:25:44.681 "name": "Nvme1", 00:25:44.681 "trtype": "tcp", 00:25:44.681 "traddr": "10.0.0.2", 00:25:44.681 "adrfam": "ipv4", 00:25:44.681 "trsvcid": "4420", 00:25:44.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:44.681 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:44.681 "hdgst": false, 00:25:44.681 "ddgst": false 00:25:44.681 }, 00:25:44.681 "method": "bdev_nvme_attach_controller" 00:25:44.681 }' 00:25:44.681 [2024-12-10 04:13:38.862474] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:25:44.681 [2024-12-10 04:13:38.862582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2501816 ] 00:25:44.681 [2024-12-10 04:13:38.935054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.681 [2024-12-10 04:13:38.994564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.941 Running I/O for 1 seconds... 00:25:46.138 8410.00 IOPS, 32.85 MiB/s 00:25:46.138 Latency(us) 00:25:46.138 [2024-12-10T03:13:40.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.138 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:46.138 Verification LBA range: start 0x0 length 0x4000 00:25:46.138 Nvme1n1 : 1.04 8180.22 31.95 0.00 0.00 15002.80 3228.25 43884.85 00:25:46.138 [2024-12-10T03:13:40.527Z] =================================================================================================================== 00:25:46.138 [2024-12-10T03:13:40.527Z] Total : 8180.22 31.95 0.00 0.00 15002.80 3228.25 43884.85 00:25:46.138 04:13:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2502079 00:25:46.138 04:13:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:46.138 04:13:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:46.138 04:13:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:46.138 04:13:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:46.138 04:13:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:46.138 04:13:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.138 04:13:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.138 { 00:25:46.138 "params": { 00:25:46.138 "name": "Nvme$subsystem", 00:25:46.138 "trtype": "$TEST_TRANSPORT", 00:25:46.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.138 "adrfam": "ipv4", 00:25:46.138 "trsvcid": "$NVMF_PORT", 00:25:46.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.138 "hdgst": ${hdgst:-false}, 00:25:46.138 "ddgst": ${ddgst:-false} 00:25:46.138 }, 00:25:46.138 "method": "bdev_nvme_attach_controller" 00:25:46.138 } 00:25:46.138 EOF 00:25:46.138 )") 00:25:46.138 04:13:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:46.138 04:13:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
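The entries that follow are the second bdevperf pass: a 15-second verify workload started with -f, during which the parent script hard-kills the nvmf target a few seconds in, so every in-flight READ/WRITE command completes with the "ABORTED - SQ DELETION" notices that fill the rest of this excerpt. Reconstructed from the host/bdevperf.sh xtrace markers (@29-@35) visible below, the sequence is roughly this (a sketch; the exact script text is an assumption, and gen_nvmf_target_json / nvmfpid come from the sourced nvmf/common.sh seen earlier):

# Rough shape of the failover pass below, reconstructed from the xtrace markers; not captured output.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!       # @30: 2502079 in this run
sleep 3              # @32: let the verify workload get going
kill -9 "$nvmfpid"   # @33: hard-kill the nvmf target (2501792 here) mid-run
sleep 3              # @35: in-flight I/O now completes as the "ABORTED - SQ DELETION" notices below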
00:25:46.138 04:13:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:46.138 04:13:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:46.138 "params": { 00:25:46.138 "name": "Nvme1", 00:25:46.138 "trtype": "tcp", 00:25:46.138 "traddr": "10.0.0.2", 00:25:46.138 "adrfam": "ipv4", 00:25:46.138 "trsvcid": "4420", 00:25:46.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:46.138 "hdgst": false, 00:25:46.138 "ddgst": false 00:25:46.138 }, 00:25:46.138 "method": "bdev_nvme_attach_controller" 00:25:46.138 }' 00:25:46.396 [2024-12-10 04:13:40.533883] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:25:46.396 [2024-12-10 04:13:40.533959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2502079 ] 00:25:46.396 [2024-12-10 04:13:40.604620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.396 [2024-12-10 04:13:40.663033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.655 Running I/O for 15 seconds... 00:25:48.969 8392.00 IOPS, 32.78 MiB/s [2024-12-10T03:13:43.621Z] 8561.50 IOPS, 33.44 MiB/s [2024-12-10T03:13:43.621Z] 04:13:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2501792 00:25:49.232 04:13:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:49.232 [2024-12-10 04:13:43.500143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.232 [2024-12-10 04:13:43.500197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 
04:13:43.500395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.500970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.500983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.501013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.501025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.501039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.501052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.501069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.501082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.501096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.501109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.501122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.501134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.232 [2024-12-10 04:13:43.501148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.232 [2024-12-10 04:13:43.501160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:49.233 [2024-12-10 04:13:43.501359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501668] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.501977] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.501989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.502002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.502014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.502028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.233 [2024-12-10 04:13:43.502040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.502053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.233 [2024-12-10 04:13:43.502065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.502079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.233 [2024-12-10 04:13:43.502091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.502104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.233 [2024-12-10 04:13:43.502116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.502131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.233 [2024-12-10 04:13:43.502143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.502159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.233 [2024-12-10 04:13:43.502172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.502186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.233 [2024-12-10 04:13:43.502198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.502212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.233 [2024-12-10 04:13:43.502224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.502238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38888 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.233 [2024-12-10 04:13:43.502250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.502264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.502276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.233 [2024-12-10 04:13:43.502290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.233 [2024-12-10 04:13:43.502303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:49.234 [2024-12-10 04:13:43.502518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502855] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.502985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.502999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.503024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.503049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.503075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.503100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.503125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503137] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.503150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.503175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.503201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.503227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.503255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.503281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.503307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.503333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.503358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.503382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.503408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.503432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.234 [2024-12-10 04:13:43.503457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.234 [2024-12-10 04:13:43.503469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.503482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.235 [2024-12-10 04:13:43.503494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.503507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.235 [2024-12-10 04:13:43.503519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.503556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.235 [2024-12-10 04:13:43.503572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.503588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.235 [2024-12-10 04:13:43.503609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.503632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.235 [2024-12-10 04:13:43.503646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.503661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.235 [2024-12-10 04:13:43.503675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.503690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.235 [2024-12-10 04:13:43.503709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 
[2024-12-10 04:13:43.503724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.235 [2024-12-10 04:13:43.503738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.503753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.235 [2024-12-10 04:13:43.503767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.503782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.235 [2024-12-10 04:13:43.503796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.503811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.235 [2024-12-10 04:13:43.503825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.503859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.235 [2024-12-10 04:13:43.503872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.503886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.235 [2024-12-10 04:13:43.503899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.503928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.235 [2024-12-10 04:13:43.503940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.503953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.235 [2024-12-10 04:13:43.503965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.503979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.235 [2024-12-10 04:13:43.503990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.504007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.235 [2024-12-10 04:13:43.504019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.504033] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e153a0 is same with the state(6) to be set 00:25:49.235 [2024-12-10 04:13:43.504049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:49.235 [2024-12-10 04:13:43.504059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:49.235 [2024-12-10 04:13:43.504069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38944 len:8 PRP1 0x0 PRP2 0x0 00:25:49.235 [2024-12-10 04:13:43.504080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.504217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.235 [2024-12-10 04:13:43.504238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.504252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.235 [2024-12-10 04:13:43.504283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.504305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.235 [2024-12-10 04:13:43.504319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.504332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.235 [2024-12-10 04:13:43.504345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.235 [2024-12-10 04:13:43.504356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.235 [2024-12-10 04:13:43.507567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.235 [2024-12-10 04:13:43.507627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.235 [2024-12-10 04:13:43.508274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-12-10 04:13:43.508303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.235 [2024-12-10 04:13:43.508319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.235 [2024-12-10 04:13:43.508567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.235 [2024-12-10 04:13:43.508791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.235 [2024-12-10 04:13:43.508813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.235 [2024-12-10 04:13:43.508829] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.235 [2024-12-10 04:13:43.508851] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.235 [2024-12-10 04:13:43.521117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.235 [2024-12-10 04:13:43.521491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-12-10 04:13:43.521538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.235 [2024-12-10 04:13:43.521563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.235 [2024-12-10 04:13:43.521820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.235 [2024-12-10 04:13:43.522048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.235 [2024-12-10 04:13:43.522066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.235 [2024-12-10 04:13:43.522078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.235 [2024-12-10 04:13:43.522089] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.235 [2024-12-10 04:13:43.534384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.235 [2024-12-10 04:13:43.534761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-12-10 04:13:43.534791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.235 [2024-12-10 04:13:43.534807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.235 [2024-12-10 04:13:43.535052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.235 [2024-12-10 04:13:43.535246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.235 [2024-12-10 04:13:43.535264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.235 [2024-12-10 04:13:43.535276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.235 [2024-12-10 04:13:43.535287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.235 [2024-12-10 04:13:43.547512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.235 [2024-12-10 04:13:43.547863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-12-10 04:13:43.547891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.235 [2024-12-10 04:13:43.547907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.235 [2024-12-10 04:13:43.548131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.235 [2024-12-10 04:13:43.548342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.235 [2024-12-10 04:13:43.548360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.235 [2024-12-10 04:13:43.548372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.235 [2024-12-10 04:13:43.548383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.235 [2024-12-10 04:13:43.560697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.235 [2024-12-10 04:13:43.561143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.236 [2024-12-10 04:13:43.561186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.236 [2024-12-10 04:13:43.561202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.236 [2024-12-10 04:13:43.561451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.236 [2024-12-10 04:13:43.561694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.236 [2024-12-10 04:13:43.561715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.236 [2024-12-10 04:13:43.561728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.236 [2024-12-10 04:13:43.561740] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.236 [2024-12-10 04:13:43.573820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.236 [2024-12-10 04:13:43.574192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.236 [2024-12-10 04:13:43.574220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.236 [2024-12-10 04:13:43.574236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.236 [2024-12-10 04:13:43.574473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.236 [2024-12-10 04:13:43.574715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.236 [2024-12-10 04:13:43.574735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.236 [2024-12-10 04:13:43.574748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.236 [2024-12-10 04:13:43.574760] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.236 [2024-12-10 04:13:43.586920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.236 [2024-12-10 04:13:43.587289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.236 [2024-12-10 04:13:43.587332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.236 [2024-12-10 04:13:43.587348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.236 [2024-12-10 04:13:43.587632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.236 [2024-12-10 04:13:43.587839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.236 [2024-12-10 04:13:43.587873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.236 [2024-12-10 04:13:43.587885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.236 [2024-12-10 04:13:43.587896] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.236 [2024-12-10 04:13:43.599947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.236 [2024-12-10 04:13:43.600443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.236 [2024-12-10 04:13:43.600483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.236 [2024-12-10 04:13:43.600499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.236 [2024-12-10 04:13:43.600767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.236 [2024-12-10 04:13:43.600998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.236 [2024-12-10 04:13:43.601021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.236 [2024-12-10 04:13:43.601034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.236 [2024-12-10 04:13:43.601045] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.498 [2024-12-10 04:13:43.613167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.498 [2024-12-10 04:13:43.613613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.498 [2024-12-10 04:13:43.613656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.498 [2024-12-10 04:13:43.613673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.498 [2024-12-10 04:13:43.613916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.498 [2024-12-10 04:13:43.614124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.498 [2024-12-10 04:13:43.614142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.498 [2024-12-10 04:13:43.614155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.498 [2024-12-10 04:13:43.614166] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.498 [2024-12-10 04:13:43.626432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.498 [2024-12-10 04:13:43.626836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.498 [2024-12-10 04:13:43.626879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.498 [2024-12-10 04:13:43.626894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.498 [2024-12-10 04:13:43.627131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.498 [2024-12-10 04:13:43.627340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.498 [2024-12-10 04:13:43.627358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.498 [2024-12-10 04:13:43.627370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.498 [2024-12-10 04:13:43.627382] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.498 [2024-12-10 04:13:43.639493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.498 [2024-12-10 04:13:43.639884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.498 [2024-12-10 04:13:43.639928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.498 [2024-12-10 04:13:43.639943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.498 [2024-12-10 04:13:43.640179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.498 [2024-12-10 04:13:43.640388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.498 [2024-12-10 04:13:43.640406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.499 [2024-12-10 04:13:43.640418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.499 [2024-12-10 04:13:43.640435] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.499 [2024-12-10 04:13:43.652510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.499 [2024-12-10 04:13:43.652878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.499 [2024-12-10 04:13:43.652906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.499 [2024-12-10 04:13:43.652922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.499 [2024-12-10 04:13:43.653144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.499 [2024-12-10 04:13:43.653355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.499 [2024-12-10 04:13:43.653373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.499 [2024-12-10 04:13:43.653385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.499 [2024-12-10 04:13:43.653396] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.499 [2024-12-10 04:13:43.665534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.499 [2024-12-10 04:13:43.665910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.499 [2024-12-10 04:13:43.665937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.499 [2024-12-10 04:13:43.665953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.499 [2024-12-10 04:13:43.666189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.499 [2024-12-10 04:13:43.666399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.499 [2024-12-10 04:13:43.666418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.499 [2024-12-10 04:13:43.666430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.499 [2024-12-10 04:13:43.666441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.499 [2024-12-10 04:13:43.678589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.499 [2024-12-10 04:13:43.678961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.499 [2024-12-10 04:13:43.679003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.499 [2024-12-10 04:13:43.679018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.499 [2024-12-10 04:13:43.679266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.499 [2024-12-10 04:13:43.679460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.499 [2024-12-10 04:13:43.679478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.499 [2024-12-10 04:13:43.679490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.499 [2024-12-10 04:13:43.679501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.499 [2024-12-10 04:13:43.691875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.499 [2024-12-10 04:13:43.692273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.499 [2024-12-10 04:13:43.692304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.499 [2024-12-10 04:13:43.692320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.499 [2024-12-10 04:13:43.692538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.499 [2024-12-10 04:13:43.692749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.499 [2024-12-10 04:13:43.692768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.499 [2024-12-10 04:13:43.692781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.499 [2024-12-10 04:13:43.692793] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.499 [2024-12-10 04:13:43.705080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.499 [2024-12-10 04:13:43.705585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.499 [2024-12-10 04:13:43.705626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.499 [2024-12-10 04:13:43.705642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.499 [2024-12-10 04:13:43.705876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.499 [2024-12-10 04:13:43.706077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.499 [2024-12-10 04:13:43.706096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.499 [2024-12-10 04:13:43.706108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.499 [2024-12-10 04:13:43.706120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.499 [2024-12-10 04:13:43.718297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.499 [2024-12-10 04:13:43.718703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.499 [2024-12-10 04:13:43.718746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.499 [2024-12-10 04:13:43.718761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.499 [2024-12-10 04:13:43.719017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.499 [2024-12-10 04:13:43.719227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.499 [2024-12-10 04:13:43.719246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.499 [2024-12-10 04:13:43.719258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.499 [2024-12-10 04:13:43.719269] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.499 [2024-12-10 04:13:43.731518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.499 [2024-12-10 04:13:43.731963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.499 [2024-12-10 04:13:43.731991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.499 [2024-12-10 04:13:43.732007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.499 [2024-12-10 04:13:43.732243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.499 [2024-12-10 04:13:43.732461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.499 [2024-12-10 04:13:43.732480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.499 [2024-12-10 04:13:43.732492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.499 [2024-12-10 04:13:43.732504] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.499 [2024-12-10 04:13:43.744698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.499 [2024-12-10 04:13:43.745044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.499 [2024-12-10 04:13:43.745072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.499 [2024-12-10 04:13:43.745088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.499 [2024-12-10 04:13:43.745318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.499 [2024-12-10 04:13:43.745533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.499 [2024-12-10 04:13:43.745576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.499 [2024-12-10 04:13:43.745590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.499 [2024-12-10 04:13:43.745603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.499 [2024-12-10 04:13:43.757996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.499 [2024-12-10 04:13:43.758391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.499 [2024-12-10 04:13:43.758420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.499 [2024-12-10 04:13:43.758436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.499 [2024-12-10 04:13:43.758676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.499 [2024-12-10 04:13:43.758919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.499 [2024-12-10 04:13:43.758939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.499 [2024-12-10 04:13:43.758952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.499 [2024-12-10 04:13:43.758965] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.499 [2024-12-10 04:13:43.772034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.500 [2024-12-10 04:13:43.772459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.500 [2024-12-10 04:13:43.772497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.500 [2024-12-10 04:13:43.772530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.500 [2024-12-10 04:13:43.772755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.500 [2024-12-10 04:13:43.772990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.500 [2024-12-10 04:13:43.773017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.500 [2024-12-10 04:13:43.773046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.500 [2024-12-10 04:13:43.773059] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.500 [2024-12-10 04:13:43.785458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.500 [2024-12-10 04:13:43.785900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.500 [2024-12-10 04:13:43.785927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.500 [2024-12-10 04:13:43.785957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.500 [2024-12-10 04:13:43.786195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.500 [2024-12-10 04:13:43.786389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.500 [2024-12-10 04:13:43.786407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.500 [2024-12-10 04:13:43.786419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.500 [2024-12-10 04:13:43.786430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.500 [2024-12-10 04:13:43.798955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.500 [2024-12-10 04:13:43.799329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.500 [2024-12-10 04:13:43.799371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.500 [2024-12-10 04:13:43.799387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.500 [2024-12-10 04:13:43.799643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.500 [2024-12-10 04:13:43.799870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.500 [2024-12-10 04:13:43.799890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.500 [2024-12-10 04:13:43.799903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.500 [2024-12-10 04:13:43.799930] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.500 [2024-12-10 04:13:43.812338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.500 [2024-12-10 04:13:43.812721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.500 [2024-12-10 04:13:43.812751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.500 [2024-12-10 04:13:43.812767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.500 [2024-12-10 04:13:43.813000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.500 [2024-12-10 04:13:43.813216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.500 [2024-12-10 04:13:43.813235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.500 [2024-12-10 04:13:43.813250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.500 [2024-12-10 04:13:43.813268] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.500 [2024-12-10 04:13:43.825862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.500 [2024-12-10 04:13:43.826275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.500 [2024-12-10 04:13:43.826331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.500 [2024-12-10 04:13:43.826347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.500 [2024-12-10 04:13:43.826594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.500 [2024-12-10 04:13:43.826808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.500 [2024-12-10 04:13:43.826842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.500 [2024-12-10 04:13:43.826855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.500 [2024-12-10 04:13:43.826867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.500 [2024-12-10 04:13:43.839171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.500 [2024-12-10 04:13:43.839611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.500 [2024-12-10 04:13:43.839640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.500 [2024-12-10 04:13:43.839656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.500 [2024-12-10 04:13:43.839886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.500 [2024-12-10 04:13:43.840102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.500 [2024-12-10 04:13:43.840121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.500 [2024-12-10 04:13:43.840133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.500 [2024-12-10 04:13:43.840145] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.500 [2024-12-10 04:13:43.852465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.500 [2024-12-10 04:13:43.852854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.500 [2024-12-10 04:13:43.852883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.500 [2024-12-10 04:13:43.852899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.500 [2024-12-10 04:13:43.853129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.500 [2024-12-10 04:13:43.853345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.500 [2024-12-10 04:13:43.853364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.500 [2024-12-10 04:13:43.853376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.500 [2024-12-10 04:13:43.853387] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.500 [2024-12-10 04:13:43.865664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.500 [2024-12-10 04:13:43.866060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.500 [2024-12-10 04:13:43.866095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.500 [2024-12-10 04:13:43.866127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.500 [2024-12-10 04:13:43.866370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.500 [2024-12-10 04:13:43.866600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.500 [2024-12-10 04:13:43.866622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.500 [2024-12-10 04:13:43.866635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.500 [2024-12-10 04:13:43.866647] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.500 [2024-12-10 04:13:43.878976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.500 [2024-12-10 04:13:43.879393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.500 [2024-12-10 04:13:43.879421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.500 [2024-12-10 04:13:43.879437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.762 [2024-12-10 04:13:43.879679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.762 [2024-12-10 04:13:43.879906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.762 [2024-12-10 04:13:43.879926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.762 [2024-12-10 04:13:43.879939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.762 [2024-12-10 04:13:43.879951] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.762 [2024-12-10 04:13:43.892256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.762 [2024-12-10 04:13:43.892659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.762 [2024-12-10 04:13:43.892688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.762 [2024-12-10 04:13:43.892705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.762 [2024-12-10 04:13:43.892936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.762 [2024-12-10 04:13:43.893152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.762 [2024-12-10 04:13:43.893171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.763 [2024-12-10 04:13:43.893183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.763 [2024-12-10 04:13:43.893196] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.763 [2024-12-10 04:13:43.905506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.763 [2024-12-10 04:13:43.905912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.763 [2024-12-10 04:13:43.905941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.763 [2024-12-10 04:13:43.905958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.763 [2024-12-10 04:13:43.906195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.763 [2024-12-10 04:13:43.906428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.763 [2024-12-10 04:13:43.906447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.763 [2024-12-10 04:13:43.906459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.763 [2024-12-10 04:13:43.906471] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.763 [2024-12-10 04:13:43.918899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.763 [2024-12-10 04:13:43.919232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.763 [2024-12-10 04:13:43.919260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.763 [2024-12-10 04:13:43.919275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.763 [2024-12-10 04:13:43.919499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.763 [2024-12-10 04:13:43.919748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.763 [2024-12-10 04:13:43.919769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.763 [2024-12-10 04:13:43.919782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.763 [2024-12-10 04:13:43.919794] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.763 [2024-12-10 04:13:43.932169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.763 [2024-12-10 04:13:43.932540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.763 [2024-12-10 04:13:43.932574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.763 [2024-12-10 04:13:43.932591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.763 [2024-12-10 04:13:43.932821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.763 [2024-12-10 04:13:43.933054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.763 [2024-12-10 04:13:43.933073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.763 [2024-12-10 04:13:43.933086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.763 [2024-12-10 04:13:43.933097] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.763 [2024-12-10 04:13:43.945380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.763 [2024-12-10 04:13:43.945735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.763 [2024-12-10 04:13:43.945763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.763 [2024-12-10 04:13:43.945779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.763 [2024-12-10 04:13:43.946011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.763 [2024-12-10 04:13:43.946228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.763 [2024-12-10 04:13:43.946252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.763 [2024-12-10 04:13:43.946265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.763 [2024-12-10 04:13:43.946276] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.763 [2024-12-10 04:13:43.958773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.763 [2024-12-10 04:13:43.959168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.763 [2024-12-10 04:13:43.959195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.763 [2024-12-10 04:13:43.959211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.763 [2024-12-10 04:13:43.959446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.763 [2024-12-10 04:13:43.959695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.763 [2024-12-10 04:13:43.959716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.763 [2024-12-10 04:13:43.959730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.763 [2024-12-10 04:13:43.959742] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.763 [2024-12-10 04:13:43.972010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.763 [2024-12-10 04:13:43.972394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.763 [2024-12-10 04:13:43.972436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.763 [2024-12-10 04:13:43.972452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.763 [2024-12-10 04:13:43.972709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.763 [2024-12-10 04:13:43.972933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.763 [2024-12-10 04:13:43.972951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.763 [2024-12-10 04:13:43.972963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.763 [2024-12-10 04:13:43.972975] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.763 [2024-12-10 04:13:43.985324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.763 [2024-12-10 04:13:43.985771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.763 [2024-12-10 04:13:43.985800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.763 [2024-12-10 04:13:43.985816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.763 [2024-12-10 04:13:43.986047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.763 [2024-12-10 04:13:43.986263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.763 [2024-12-10 04:13:43.986282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.763 [2024-12-10 04:13:43.986294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.763 [2024-12-10 04:13:43.986306] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.763 [2024-12-10 04:13:43.998590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.763 [2024-12-10 04:13:43.998983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.763 [2024-12-10 04:13:43.999011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.763 [2024-12-10 04:13:43.999026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.763 [2024-12-10 04:13:43.999257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.763 [2024-12-10 04:13:43.999472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.763 [2024-12-10 04:13:43.999491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.763 [2024-12-10 04:13:43.999503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.763 [2024-12-10 04:13:43.999515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.763 [2024-12-10 04:13:44.011938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.763 [2024-12-10 04:13:44.012308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.763 [2024-12-10 04:13:44.012335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.763 [2024-12-10 04:13:44.012352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.763 [2024-12-10 04:13:44.012577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.763 [2024-12-10 04:13:44.012798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.763 [2024-12-10 04:13:44.012818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.763 [2024-12-10 04:13:44.012832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.763 [2024-12-10 04:13:44.012845] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.763 7078.67 IOPS, 27.65 MiB/s [2024-12-10T03:13:44.152Z] [2024-12-10 04:13:44.026699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.763 [2024-12-10 04:13:44.027069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.763 [2024-12-10 04:13:44.027098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.763 [2024-12-10 04:13:44.027114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.763 [2024-12-10 04:13:44.027345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.763 [2024-12-10 04:13:44.027589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.764 [2024-12-10 04:13:44.027626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.764 [2024-12-10 04:13:44.027639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.764 [2024-12-10 04:13:44.027652] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.764 [2024-12-10 04:13:44.040021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.764 [2024-12-10 04:13:44.040399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.764 [2024-12-10 04:13:44.040432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.764 [2024-12-10 04:13:44.040449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.764 [2024-12-10 04:13:44.040677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.764 [2024-12-10 04:13:44.040924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.764 [2024-12-10 04:13:44.040943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.764 [2024-12-10 04:13:44.040955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.764 [2024-12-10 04:13:44.040967] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
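The bdevperf progress line above (7078.67 IOPS, 27.65 MiB/s) implies an average I/O size of about 4 KiB, since 27.65 MiB/s divided by 7078.67 IOPS comes out to roughly 4096 bytes; the configured block size is not printed in this excerpt, so treat the 4 KiB figure as inferred from the ratio rather than taken from the test configuration. A minimal shell check of that arithmetic, using only standard awk:
# derive the implied average I/O size from the IOPS/throughput pair printed above
awk 'BEGIN { iops = 7078.67; mibs = 27.65; printf "implied avg I/O size: %.0f bytes\n", (mibs * 1048576) / iops }'   # prints ~4096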
00:25:49.764 [2024-12-10 04:13:44.053389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.764 [2024-12-10 04:13:44.053752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.764 [2024-12-10 04:13:44.053796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.764 [2024-12-10 04:13:44.053811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.764 [2024-12-10 04:13:44.054054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.764 [2024-12-10 04:13:44.054269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.764 [2024-12-10 04:13:44.054288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.764 [2024-12-10 04:13:44.054301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.764 [2024-12-10 04:13:44.054312] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.764 [2024-12-10 04:13:44.066733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.764 [2024-12-10 04:13:44.067122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.764 [2024-12-10 04:13:44.067164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.764 [2024-12-10 04:13:44.067180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.764 [2024-12-10 04:13:44.067434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.764 [2024-12-10 04:13:44.067679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.764 [2024-12-10 04:13:44.067699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.764 [2024-12-10 04:13:44.067712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.764 [2024-12-10 04:13:44.067724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.764 [2024-12-10 04:13:44.080034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.764 [2024-12-10 04:13:44.080481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.764 [2024-12-10 04:13:44.080509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.764 [2024-12-10 04:13:44.080525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.764 [2024-12-10 04:13:44.080769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.764 [2024-12-10 04:13:44.081004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.764 [2024-12-10 04:13:44.081023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.764 [2024-12-10 04:13:44.081036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.764 [2024-12-10 04:13:44.081048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.764 [2024-12-10 04:13:44.093413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.764 [2024-12-10 04:13:44.093809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.764 [2024-12-10 04:13:44.093838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.764 [2024-12-10 04:13:44.093854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.764 [2024-12-10 04:13:44.094099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.764 [2024-12-10 04:13:44.094299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.764 [2024-12-10 04:13:44.094318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.764 [2024-12-10 04:13:44.094330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.764 [2024-12-10 04:13:44.094342] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.764 [2024-12-10 04:13:44.106813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.764 [2024-12-10 04:13:44.107234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.764 [2024-12-10 04:13:44.107262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.764 [2024-12-10 04:13:44.107278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.764 [2024-12-10 04:13:44.107507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.764 [2024-12-10 04:13:44.107757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.764 [2024-12-10 04:13:44.107778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.764 [2024-12-10 04:13:44.107791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.764 [2024-12-10 04:13:44.107804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.764 [2024-12-10 04:13:44.120113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.764 [2024-12-10 04:13:44.120485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.764 [2024-12-10 04:13:44.120527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.764 [2024-12-10 04:13:44.120542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.764 [2024-12-10 04:13:44.120795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.764 [2024-12-10 04:13:44.121029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.764 [2024-12-10 04:13:44.121053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.764 [2024-12-10 04:13:44.121066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.764 [2024-12-10 04:13:44.121078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.764 [2024-12-10 04:13:44.133396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.764 [2024-12-10 04:13:44.133765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.764 [2024-12-10 04:13:44.133793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:49.764 [2024-12-10 04:13:44.133808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:49.764 [2024-12-10 04:13:44.134039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:49.764 [2024-12-10 04:13:44.134255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.764 [2024-12-10 04:13:44.134273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.764 [2024-12-10 04:13:44.134286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.764 [2024-12-10 04:13:44.134297] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.026 [2024-12-10 04:13:44.146799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.026 [2024-12-10 04:13:44.147149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.026 [2024-12-10 04:13:44.147178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.026 [2024-12-10 04:13:44.147195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.026 [2024-12-10 04:13:44.147425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.026 [2024-12-10 04:13:44.147689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.026 [2024-12-10 04:13:44.147710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.026 [2024-12-10 04:13:44.147724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.026 [2024-12-10 04:13:44.147737] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.026 [2024-12-10 04:13:44.160254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.026 [2024-12-10 04:13:44.160600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.026 [2024-12-10 04:13:44.160630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.026 [2024-12-10 04:13:44.160646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.026 [2024-12-10 04:13:44.160894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.027 [2024-12-10 04:13:44.161110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-10 04:13:44.161129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-10 04:13:44.161141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-10 04:13:44.161176] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.027 [2024-12-10 04:13:44.173747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-10 04:13:44.174135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-10 04:13:44.174177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-10 04:13:44.174193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.027 [2024-12-10 04:13:44.174443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.027 [2024-12-10 04:13:44.174675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-10 04:13:44.174696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-10 04:13:44.174709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-10 04:13:44.174721] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.027 [2024-12-10 04:13:44.187065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-10 04:13:44.187506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-10 04:13:44.187535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-10 04:13:44.187561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.027 [2024-12-10 04:13:44.187808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.027 [2024-12-10 04:13:44.188032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-10 04:13:44.188051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-10 04:13:44.188064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-10 04:13:44.188076] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.027 [2024-12-10 04:13:44.200324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-10 04:13:44.200692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-10 04:13:44.200735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-10 04:13:44.200751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.027 [2024-12-10 04:13:44.200993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.027 [2024-12-10 04:13:44.201209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-10 04:13:44.201227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-10 04:13:44.201240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-10 04:13:44.201252] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.027 [2024-12-10 04:13:44.213733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-10 04:13:44.214126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-10 04:13:44.214158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-10 04:13:44.214175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.027 [2024-12-10 04:13:44.214412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.027 [2024-12-10 04:13:44.214642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-10 04:13:44.214663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-10 04:13:44.214676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-10 04:13:44.214689] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.027 [2024-12-10 04:13:44.227074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-10 04:13:44.227447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-10 04:13:44.227490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-10 04:13:44.227505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.027 [2024-12-10 04:13:44.227761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.027 [2024-12-10 04:13:44.227998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-10 04:13:44.228017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-10 04:13:44.228029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-10 04:13:44.228041] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.027 [2024-12-10 04:13:44.240416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-10 04:13:44.240881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-10 04:13:44.240923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-10 04:13:44.240940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.027 [2024-12-10 04:13:44.241181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.027 [2024-12-10 04:13:44.241381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-10 04:13:44.241399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-10 04:13:44.241412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-10 04:13:44.241423] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.027 [2024-12-10 04:13:44.253812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-10 04:13:44.254265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-10 04:13:44.254293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-10 04:13:44.254308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.027 [2024-12-10 04:13:44.254566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.027 [2024-12-10 04:13:44.254793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-10 04:13:44.254814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-10 04:13:44.254827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-10 04:13:44.254839] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.027 [2024-12-10 04:13:44.267041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-10 04:13:44.267433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-10 04:13:44.267461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-10 04:13:44.267477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.027 [2024-12-10 04:13:44.267703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.027 [2024-12-10 04:13:44.267936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-10 04:13:44.267956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-10 04:13:44.267970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-10 04:13:44.267982] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.027 [2024-12-10 04:13:44.280483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-10 04:13:44.280855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-10 04:13:44.280884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-10 04:13:44.280900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.027 [2024-12-10 04:13:44.281133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.027 [2024-12-10 04:13:44.281350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-10 04:13:44.281368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-10 04:13:44.281381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-10 04:13:44.281392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.027 [2024-12-10 04:13:44.293866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-10 04:13:44.294274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-10 04:13:44.294303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-10 04:13:44.294319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.028 [2024-12-10 04:13:44.294559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.028 [2024-12-10 04:13:44.294773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.028 [2024-12-10 04:13:44.294798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.028 [2024-12-10 04:13:44.294811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.028 [2024-12-10 04:13:44.294838] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.028 [2024-12-10 04:13:44.307156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.028 [2024-12-10 04:13:44.307528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.028 [2024-12-10 04:13:44.307577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.028 [2024-12-10 04:13:44.307595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.028 [2024-12-10 04:13:44.307837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.028 [2024-12-10 04:13:44.308053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.028 [2024-12-10 04:13:44.308072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.028 [2024-12-10 04:13:44.308085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.028 [2024-12-10 04:13:44.308096] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.028 [2024-12-10 04:13:44.320489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.028 [2024-12-10 04:13:44.320874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.028 [2024-12-10 04:13:44.320902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.028 [2024-12-10 04:13:44.320918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.028 [2024-12-10 04:13:44.321163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.028 [2024-12-10 04:13:44.321363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.028 [2024-12-10 04:13:44.321382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.028 [2024-12-10 04:13:44.321394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.028 [2024-12-10 04:13:44.321406] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.028 [2024-12-10 04:13:44.333829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.028 [2024-12-10 04:13:44.334240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.028 [2024-12-10 04:13:44.334267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.028 [2024-12-10 04:13:44.334282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.028 [2024-12-10 04:13:44.334506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.028 [2024-12-10 04:13:44.334740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.028 [2024-12-10 04:13:44.334761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.028 [2024-12-10 04:13:44.334774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.028 [2024-12-10 04:13:44.334792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.028 [2024-12-10 04:13:44.347130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.028 [2024-12-10 04:13:44.347525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.028 [2024-12-10 04:13:44.347576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.028 [2024-12-10 04:13:44.347593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.028 [2024-12-10 04:13:44.347824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.028 [2024-12-10 04:13:44.348041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.028 [2024-12-10 04:13:44.348059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.028 [2024-12-10 04:13:44.348071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.028 [2024-12-10 04:13:44.348083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.028 [2024-12-10 04:13:44.360420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.028 [2024-12-10 04:13:44.360817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.028 [2024-12-10 04:13:44.360860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.028 [2024-12-10 04:13:44.360875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.028 [2024-12-10 04:13:44.361112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.028 [2024-12-10 04:13:44.361311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.028 [2024-12-10 04:13:44.361330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.028 [2024-12-10 04:13:44.361342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.028 [2024-12-10 04:13:44.361354] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.028 [2024-12-10 04:13:44.373673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.028 [2024-12-10 04:13:44.374132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.028 [2024-12-10 04:13:44.374160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.028 [2024-12-10 04:13:44.374175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.028 [2024-12-10 04:13:44.374419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.028 [2024-12-10 04:13:44.374666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.028 [2024-12-10 04:13:44.374687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.028 [2024-12-10 04:13:44.374700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.028 [2024-12-10 04:13:44.374712] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.028 [2024-12-10 04:13:44.387003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.028 [2024-12-10 04:13:44.387357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.028 [2024-12-10 04:13:44.387390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.028 [2024-12-10 04:13:44.387406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.028 [2024-12-10 04:13:44.387647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.028 [2024-12-10 04:13:44.387875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.028 [2024-12-10 04:13:44.387910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.028 [2024-12-10 04:13:44.387922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.028 [2024-12-10 04:13:44.387934] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.028 [2024-12-10 04:13:44.400253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.028 [2024-12-10 04:13:44.400564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.028 [2024-12-10 04:13:44.400606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.028 [2024-12-10 04:13:44.400621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.028 [2024-12-10 04:13:44.400837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.028 [2024-12-10 04:13:44.401057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.028 [2024-12-10 04:13:44.401075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.028 [2024-12-10 04:13:44.401088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.028 [2024-12-10 04:13:44.401099] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.290 [2024-12-10 04:13:44.413645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.290 [2024-12-10 04:13:44.414021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-12-10 04:13:44.414050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.290 [2024-12-10 04:13:44.414066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.290 [2024-12-10 04:13:44.414315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.290 [2024-12-10 04:13:44.414521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.290 [2024-12-10 04:13:44.414566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.290 [2024-12-10 04:13:44.414581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.290 [2024-12-10 04:13:44.414598] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.290 [2024-12-10 04:13:44.427017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.290 [2024-12-10 04:13:44.427396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-12-10 04:13:44.427438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.290 [2024-12-10 04:13:44.427454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.290 [2024-12-10 04:13:44.427717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.290 [2024-12-10 04:13:44.427958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.290 [2024-12-10 04:13:44.427976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.290 [2024-12-10 04:13:44.427989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.290 [2024-12-10 04:13:44.428001] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.290 [2024-12-10 04:13:44.440243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.290 [2024-12-10 04:13:44.440662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-12-10 04:13:44.440690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.290 [2024-12-10 04:13:44.440707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.290 [2024-12-10 04:13:44.440951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.290 [2024-12-10 04:13:44.441152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.290 [2024-12-10 04:13:44.441170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.290 [2024-12-10 04:13:44.441182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.290 [2024-12-10 04:13:44.441194] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.290 [2024-12-10 04:13:44.453558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.290 [2024-12-10 04:13:44.453941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-12-10 04:13:44.453968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.290 [2024-12-10 04:13:44.453984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.290 [2024-12-10 04:13:44.454229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.290 [2024-12-10 04:13:44.454428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.290 [2024-12-10 04:13:44.454447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.290 [2024-12-10 04:13:44.454459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.290 [2024-12-10 04:13:44.454471] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.290 [2024-12-10 04:13:44.466871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.290 [2024-12-10 04:13:44.467244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-12-10 04:13:44.467286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.290 [2024-12-10 04:13:44.467302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.290 [2024-12-10 04:13:44.467584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.290 [2024-12-10 04:13:44.467797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.290 [2024-12-10 04:13:44.467822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.290 [2024-12-10 04:13:44.467836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.290 [2024-12-10 04:13:44.467866] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.290 [2024-12-10 04:13:44.480096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.290 [2024-12-10 04:13:44.480534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-12-10 04:13:44.480568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.290 [2024-12-10 04:13:44.480585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.290 [2024-12-10 04:13:44.480801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.290 [2024-12-10 04:13:44.481034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.290 [2024-12-10 04:13:44.481053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.290 [2024-12-10 04:13:44.481066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.290 [2024-12-10 04:13:44.481077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.290 [2024-12-10 04:13:44.493346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.290 [2024-12-10 04:13:44.493754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-12-10 04:13:44.493782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.290 [2024-12-10 04:13:44.493798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.290 [2024-12-10 04:13:44.494042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.290 [2024-12-10 04:13:44.494242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.290 [2024-12-10 04:13:44.494261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.290 [2024-12-10 04:13:44.494273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.290 [2024-12-10 04:13:44.494285] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.290 [2024-12-10 04:13:44.506716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.290 [2024-12-10 04:13:44.507139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-12-10 04:13:44.507168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.290 [2024-12-10 04:13:44.507185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.291 [2024-12-10 04:13:44.507416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.291 [2024-12-10 04:13:44.507659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.291 [2024-12-10 04:13:44.507680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.291 [2024-12-10 04:13:44.507692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.291 [2024-12-10 04:13:44.507709] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.291 [2024-12-10 04:13:44.520098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.291 [2024-12-10 04:13:44.520477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-12-10 04:13:44.520506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.291 [2024-12-10 04:13:44.520522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.291 [2024-12-10 04:13:44.520747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.291 [2024-12-10 04:13:44.520980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.291 [2024-12-10 04:13:44.521000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.291 [2024-12-10 04:13:44.521013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.291 [2024-12-10 04:13:44.521026] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.291 [2024-12-10 04:13:44.533680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.291 [2024-12-10 04:13:44.534070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-12-10 04:13:44.534099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.291 [2024-12-10 04:13:44.534115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.291 [2024-12-10 04:13:44.534346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.291 [2024-12-10 04:13:44.534609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.291 [2024-12-10 04:13:44.534630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.291 [2024-12-10 04:13:44.534643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.291 [2024-12-10 04:13:44.534656] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.291 [2024-12-10 04:13:44.547081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.291 [2024-12-10 04:13:44.547429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-12-10 04:13:44.547457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.291 [2024-12-10 04:13:44.547473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.291 [2024-12-10 04:13:44.547714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.291 [2024-12-10 04:13:44.547955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.291 [2024-12-10 04:13:44.547974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.291 [2024-12-10 04:13:44.547987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.291 [2024-12-10 04:13:44.547999] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.291 [2024-12-10 04:13:44.560463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.291 [2024-12-10 04:13:44.560931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-12-10 04:13:44.560964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.291 [2024-12-10 04:13:44.560980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.291 [2024-12-10 04:13:44.561202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.291 [2024-12-10 04:13:44.561401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.291 [2024-12-10 04:13:44.561420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.291 [2024-12-10 04:13:44.561432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.291 [2024-12-10 04:13:44.561444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.291 [2024-12-10 04:13:44.573897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.291 [2024-12-10 04:13:44.574276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-12-10 04:13:44.574318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.291 [2024-12-10 04:13:44.574334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.291 [2024-12-10 04:13:44.574604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.291 [2024-12-10 04:13:44.574839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.291 [2024-12-10 04:13:44.574859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.291 [2024-12-10 04:13:44.574872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.291 [2024-12-10 04:13:44.574899] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.291 [2024-12-10 04:13:44.587325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.291 [2024-12-10 04:13:44.587761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-12-10 04:13:44.587789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.291 [2024-12-10 04:13:44.587818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.291 [2024-12-10 04:13:44.588065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.291 [2024-12-10 04:13:44.588272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.291 [2024-12-10 04:13:44.588291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.291 [2024-12-10 04:13:44.588303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.291 [2024-12-10 04:13:44.588315] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.291 [2024-12-10 04:13:44.600596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.291 [2024-12-10 04:13:44.600961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-12-10 04:13:44.600988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.291 [2024-12-10 04:13:44.601003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.291 [2024-12-10 04:13:44.601231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.291 [2024-12-10 04:13:44.601449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.291 [2024-12-10 04:13:44.601468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.291 [2024-12-10 04:13:44.601480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.291 [2024-12-10 04:13:44.601491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.291 [2024-12-10 04:13:44.613960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.291 [2024-12-10 04:13:44.614363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-12-10 04:13:44.614405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.291 [2024-12-10 04:13:44.614421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.291 [2024-12-10 04:13:44.614661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.291 [2024-12-10 04:13:44.614888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.291 [2024-12-10 04:13:44.614911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.291 [2024-12-10 04:13:44.614923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.291 [2024-12-10 04:13:44.614935] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.291 [2024-12-10 04:13:44.627350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.291 [2024-12-10 04:13:44.627751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-12-10 04:13:44.627780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.291 [2024-12-10 04:13:44.627796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.291 [2024-12-10 04:13:44.628039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.291 [2024-12-10 04:13:44.628239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.291 [2024-12-10 04:13:44.628258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.291 [2024-12-10 04:13:44.628269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.291 [2024-12-10 04:13:44.628281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.291 [2024-12-10 04:13:44.640649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.291 [2024-12-10 04:13:44.641045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-12-10 04:13:44.641088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.291 [2024-12-10 04:13:44.641103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.291 [2024-12-10 04:13:44.641374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.291 [2024-12-10 04:13:44.641602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.292 [2024-12-10 04:13:44.641631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.292 [2024-12-10 04:13:44.641644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.292 [2024-12-10 04:13:44.641656] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.292 [2024-12-10 04:13:44.654040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.292 [2024-12-10 04:13:44.654478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-12-10 04:13:44.654506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.292 [2024-12-10 04:13:44.654522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.292 [2024-12-10 04:13:44.654760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.292 [2024-12-10 04:13:44.654982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.292 [2024-12-10 04:13:44.655001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.292 [2024-12-10 04:13:44.655013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.292 [2024-12-10 04:13:44.655025] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.292 [2024-12-10 04:13:44.667439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.292 [2024-12-10 04:13:44.667803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-12-10 04:13:44.667832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.292 [2024-12-10 04:13:44.667848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.292 [2024-12-10 04:13:44.668090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.292 [2024-12-10 04:13:44.668327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.292 [2024-12-10 04:13:44.668347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.292 [2024-12-10 04:13:44.668359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.292 [2024-12-10 04:13:44.668371] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.551 [2024-12-10 04:13:44.680878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.552 [2024-12-10 04:13:44.681248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.552 [2024-12-10 04:13:44.681278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.552 [2024-12-10 04:13:44.681294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.552 [2024-12-10 04:13:44.681526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.552 [2024-12-10 04:13:44.681770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.552 [2024-12-10 04:13:44.681792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.552 [2024-12-10 04:13:44.681806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.552 [2024-12-10 04:13:44.681838] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.552 [2024-12-10 04:13:44.694267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.552 [2024-12-10 04:13:44.694619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.552 [2024-12-10 04:13:44.694647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.552 [2024-12-10 04:13:44.694664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.552 [2024-12-10 04:13:44.694894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.552 [2024-12-10 04:13:44.695109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.552 [2024-12-10 04:13:44.695128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.552 [2024-12-10 04:13:44.695140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.552 [2024-12-10 04:13:44.695152] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.552 [2024-12-10 04:13:44.707724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.552 [2024-12-10 04:13:44.708115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.552 [2024-12-10 04:13:44.708157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.552 [2024-12-10 04:13:44.708173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.552 [2024-12-10 04:13:44.708422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.552 [2024-12-10 04:13:44.708664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.552 [2024-12-10 04:13:44.708685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.552 [2024-12-10 04:13:44.708698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.552 [2024-12-10 04:13:44.708710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.552 [2024-12-10 04:13:44.720928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.552 [2024-12-10 04:13:44.721265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.552 [2024-12-10 04:13:44.721293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.552 [2024-12-10 04:13:44.721309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.552 [2024-12-10 04:13:44.721534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.552 [2024-12-10 04:13:44.721775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.552 [2024-12-10 04:13:44.721794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.552 [2024-12-10 04:13:44.721806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.552 [2024-12-10 04:13:44.721818] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.552 [2024-12-10 04:13:44.734074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.552 [2024-12-10 04:13:44.734458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.552 [2024-12-10 04:13:44.734504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.552 [2024-12-10 04:13:44.734521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.552 [2024-12-10 04:13:44.734775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.552 [2024-12-10 04:13:44.735005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.552 [2024-12-10 04:13:44.735024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.552 [2024-12-10 04:13:44.735036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.552 [2024-12-10 04:13:44.735047] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.552 [2024-12-10 04:13:44.747097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.552 [2024-12-10 04:13:44.747526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.552 [2024-12-10 04:13:44.747575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.552 [2024-12-10 04:13:44.747592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.552 [2024-12-10 04:13:44.747859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.552 [2024-12-10 04:13:44.748069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.552 [2024-12-10 04:13:44.748088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.552 [2024-12-10 04:13:44.748100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.552 [2024-12-10 04:13:44.748111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.552 [2024-12-10 04:13:44.760193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.552 [2024-12-10 04:13:44.760604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.552 [2024-12-10 04:13:44.760631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.552 [2024-12-10 04:13:44.760660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.552 [2024-12-10 04:13:44.760897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.552 [2024-12-10 04:13:44.761091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.552 [2024-12-10 04:13:44.761109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.552 [2024-12-10 04:13:44.761120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.552 [2024-12-10 04:13:44.761132] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.552 [2024-12-10 04:13:44.773221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.552 [2024-12-10 04:13:44.773598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.552 [2024-12-10 04:13:44.773627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.552 [2024-12-10 04:13:44.773643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.552 [2024-12-10 04:13:44.773864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.552 [2024-12-10 04:13:44.774088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.552 [2024-12-10 04:13:44.774107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.552 [2024-12-10 04:13:44.774119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.552 [2024-12-10 04:13:44.774131] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.552 [2024-12-10 04:13:44.786634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.552 [2024-12-10 04:13:44.787115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.552 [2024-12-10 04:13:44.787168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.552 [2024-12-10 04:13:44.787183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.552 [2024-12-10 04:13:44.787446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.552 [2024-12-10 04:13:44.787674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.552 [2024-12-10 04:13:44.787695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.552 [2024-12-10 04:13:44.787708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.552 [2024-12-10 04:13:44.787720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.552 [2024-12-10 04:13:44.799932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.552 [2024-12-10 04:13:44.800372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.552 [2024-12-10 04:13:44.800415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.552 [2024-12-10 04:13:44.800432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.552 [2024-12-10 04:13:44.800675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.552 [2024-12-10 04:13:44.800927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.552 [2024-12-10 04:13:44.800945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.552 [2024-12-10 04:13:44.800958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.552 [2024-12-10 04:13:44.800969] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.552 [2024-12-10 04:13:44.812989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.553 [2024-12-10 04:13:44.813309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.553 [2024-12-10 04:13:44.813336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.553 [2024-12-10 04:13:44.813351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.553 [2024-12-10 04:13:44.813577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.553 [2024-12-10 04:13:44.813792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.553 [2024-12-10 04:13:44.813815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.553 [2024-12-10 04:13:44.813828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.553 [2024-12-10 04:13:44.813841] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.553 [2024-12-10 04:13:44.826057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.553 [2024-12-10 04:13:44.826465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.553 [2024-12-10 04:13:44.826530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.553 [2024-12-10 04:13:44.826555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.553 [2024-12-10 04:13:44.826816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.553 [2024-12-10 04:13:44.827043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.553 [2024-12-10 04:13:44.827061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.553 [2024-12-10 04:13:44.827073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.553 [2024-12-10 04:13:44.827084] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.553 [2024-12-10 04:13:44.839318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.553 [2024-12-10 04:13:44.839742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.553 [2024-12-10 04:13:44.839771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.553 [2024-12-10 04:13:44.839787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.553 [2024-12-10 04:13:44.840028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.553 [2024-12-10 04:13:44.840239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.553 [2024-12-10 04:13:44.840257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.553 [2024-12-10 04:13:44.840269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.553 [2024-12-10 04:13:44.840281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.553 [2024-12-10 04:13:44.852668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.553 [2024-12-10 04:13:44.853113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.553 [2024-12-10 04:13:44.853155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.553 [2024-12-10 04:13:44.853171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.553 [2024-12-10 04:13:44.853413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.553 [2024-12-10 04:13:44.853652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.553 [2024-12-10 04:13:44.853672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.553 [2024-12-10 04:13:44.853685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.553 [2024-12-10 04:13:44.853702] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.553 [2024-12-10 04:13:44.865939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.553 [2024-12-10 04:13:44.866431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.553 [2024-12-10 04:13:44.866471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.553 [2024-12-10 04:13:44.866487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.553 [2024-12-10 04:13:44.866749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.553 [2024-12-10 04:13:44.866981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.553 [2024-12-10 04:13:44.866999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.553 [2024-12-10 04:13:44.867012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.553 [2024-12-10 04:13:44.867023] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.553 [2024-12-10 04:13:44.879127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.553 [2024-12-10 04:13:44.879559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.553 [2024-12-10 04:13:44.879613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.553 [2024-12-10 04:13:44.879628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.553 [2024-12-10 04:13:44.879876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.553 [2024-12-10 04:13:44.880076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.553 [2024-12-10 04:13:44.880094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.553 [2024-12-10 04:13:44.880106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.553 [2024-12-10 04:13:44.880118] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.553 [2024-12-10 04:13:44.892453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.553 [2024-12-10 04:13:44.892896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.553 [2024-12-10 04:13:44.892938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.553 [2024-12-10 04:13:44.892955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.553 [2024-12-10 04:13:44.893198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.553 [2024-12-10 04:13:44.893392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.553 [2024-12-10 04:13:44.893411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.553 [2024-12-10 04:13:44.893422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.553 [2024-12-10 04:13:44.893434] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.553 [2024-12-10 04:13:44.905754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.553 [2024-12-10 04:13:44.906265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.553 [2024-12-10 04:13:44.906311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.553 [2024-12-10 04:13:44.906328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.553 [2024-12-10 04:13:44.906583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.553 [2024-12-10 04:13:44.906798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.553 [2024-12-10 04:13:44.906818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.553 [2024-12-10 04:13:44.906831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.553 [2024-12-10 04:13:44.906842] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.553 [2024-12-10 04:13:44.919124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.553 [2024-12-10 04:13:44.919528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.553 [2024-12-10 04:13:44.919619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.553 [2024-12-10 04:13:44.919636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.553 [2024-12-10 04:13:44.919892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.553 [2024-12-10 04:13:44.920102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.553 [2024-12-10 04:13:44.920121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.553 [2024-12-10 04:13:44.920133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.553 [2024-12-10 04:13:44.920144] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.553 [2024-12-10 04:13:44.932422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.553 [2024-12-10 04:13:44.932953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.553 [2024-12-10 04:13:44.933004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.553 [2024-12-10 04:13:44.933020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.815 [2024-12-10 04:13:44.933268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.815 [2024-12-10 04:13:44.933467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.815 [2024-12-10 04:13:44.933486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.815 [2024-12-10 04:13:44.933499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.815 [2024-12-10 04:13:44.933510] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.815 [2024-12-10 04:13:44.945711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.815 [2024-12-10 04:13:44.946072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.815 [2024-12-10 04:13:44.946100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.815 [2024-12-10 04:13:44.946116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.815 [2024-12-10 04:13:44.946347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.815 [2024-12-10 04:13:44.946606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.815 [2024-12-10 04:13:44.946627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.815 [2024-12-10 04:13:44.946640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.815 [2024-12-10 04:13:44.946653] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.815 [2024-12-10 04:13:44.959090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.815 [2024-12-10 04:13:44.959501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.815 [2024-12-10 04:13:44.959528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.815 [2024-12-10 04:13:44.959566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.815 [2024-12-10 04:13:44.959812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.815 [2024-12-10 04:13:44.960031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.815 [2024-12-10 04:13:44.960050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.815 [2024-12-10 04:13:44.960062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.815 [2024-12-10 04:13:44.960073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.815 [2024-12-10 04:13:44.972379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.815 [2024-12-10 04:13:44.972774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.815 [2024-12-10 04:13:44.972817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.815 [2024-12-10 04:13:44.972833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.815 [2024-12-10 04:13:44.973071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.815 [2024-12-10 04:13:44.973282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.815 [2024-12-10 04:13:44.973300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.815 [2024-12-10 04:13:44.973312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.815 [2024-12-10 04:13:44.973323] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.815 [2024-12-10 04:13:44.985503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.815 [2024-12-10 04:13:44.986010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.815 [2024-12-10 04:13:44.986062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.815 [2024-12-10 04:13:44.986078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.815 [2024-12-10 04:13:44.986342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.815 [2024-12-10 04:13:44.986536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.815 [2024-12-10 04:13:44.986583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.815 [2024-12-10 04:13:44.986598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.816 [2024-12-10 04:13:44.986610] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.816 [2024-12-10 04:13:44.998705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.816 [2024-12-10 04:13:44.999200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.816 [2024-12-10 04:13:44.999251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.816 [2024-12-10 04:13:44.999266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.816 [2024-12-10 04:13:44.999529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.816 [2024-12-10 04:13:44.999754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.816 [2024-12-10 04:13:44.999774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.816 [2024-12-10 04:13:44.999786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.816 [2024-12-10 04:13:44.999797] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.816 [2024-12-10 04:13:45.011738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.816 [2024-12-10 04:13:45.012175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.816 [2024-12-10 04:13:45.012218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.816 [2024-12-10 04:13:45.012234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.816 [2024-12-10 04:13:45.012476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.816 [2024-12-10 04:13:45.012734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.816 [2024-12-10 04:13:45.012754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.816 [2024-12-10 04:13:45.012767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.816 [2024-12-10 04:13:45.012779] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.816 5309.00 IOPS, 20.74 MiB/s [2024-12-10T03:13:45.205Z] [2024-12-10 04:13:45.026299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.816 [2024-12-10 04:13:45.026696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.816 [2024-12-10 04:13:45.026725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.816 [2024-12-10 04:13:45.026741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.816 [2024-12-10 04:13:45.026971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.816 [2024-12-10 04:13:45.027193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.816 [2024-12-10 04:13:45.027212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.816 [2024-12-10 04:13:45.027241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.816 [2024-12-10 04:13:45.027257] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.816 [2024-12-10 04:13:45.039702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.816 [2024-12-10 04:13:45.040106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.816 [2024-12-10 04:13:45.040147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.816 [2024-12-10 04:13:45.040161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.816 [2024-12-10 04:13:45.040405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.816 [2024-12-10 04:13:45.040632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.816 [2024-12-10 04:13:45.040653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.816 [2024-12-10 04:13:45.040666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.816 [2024-12-10 04:13:45.040678] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.816 [2024-12-10 04:13:45.053119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.816 [2024-12-10 04:13:45.053520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.816 [2024-12-10 04:13:45.053559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.816 [2024-12-10 04:13:45.053592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.816 [2024-12-10 04:13:45.053838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.816 [2024-12-10 04:13:45.054048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.816 [2024-12-10 04:13:45.054066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.816 [2024-12-10 04:13:45.054079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.816 [2024-12-10 04:13:45.054090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.816 [2024-12-10 04:13:45.066388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.816 [2024-12-10 04:13:45.066835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.816 [2024-12-10 04:13:45.066898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.816 [2024-12-10 04:13:45.066913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.816 [2024-12-10 04:13:45.067156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.816 [2024-12-10 04:13:45.067350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.816 [2024-12-10 04:13:45.067368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.816 [2024-12-10 04:13:45.067380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.816 [2024-12-10 04:13:45.067391] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.816 [2024-12-10 04:13:45.079604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.816 [2024-12-10 04:13:45.080040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.816 [2024-12-10 04:13:45.080068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.816 [2024-12-10 04:13:45.080098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.816 [2024-12-10 04:13:45.080339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.816 [2024-12-10 04:13:45.080573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.816 [2024-12-10 04:13:45.080596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.816 [2024-12-10 04:13:45.080609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.816 [2024-12-10 04:13:45.080620] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.816 [2024-12-10 04:13:45.092833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.816 [2024-12-10 04:13:45.093190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.816 [2024-12-10 04:13:45.093217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.816 [2024-12-10 04:13:45.093233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.816 [2024-12-10 04:13:45.093455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.816 [2024-12-10 04:13:45.093714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.816 [2024-12-10 04:13:45.093735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.816 [2024-12-10 04:13:45.093747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.816 [2024-12-10 04:13:45.093759] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.816 [2024-12-10 04:13:45.105914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.816 [2024-12-10 04:13:45.106278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.816 [2024-12-10 04:13:45.106305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.816 [2024-12-10 04:13:45.106321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.816 [2024-12-10 04:13:45.106568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.816 [2024-12-10 04:13:45.106768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.816 [2024-12-10 04:13:45.106787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.816 [2024-12-10 04:13:45.106800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.816 [2024-12-10 04:13:45.106811] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.816 [2024-12-10 04:13:45.119034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.816 [2024-12-10 04:13:45.119410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.816 [2024-12-10 04:13:45.119453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.816 [2024-12-10 04:13:45.119469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.816 [2024-12-10 04:13:45.119740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.816 [2024-12-10 04:13:45.119955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.816 [2024-12-10 04:13:45.119973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.816 [2024-12-10 04:13:45.119985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.816 [2024-12-10 04:13:45.119997] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.817 [2024-12-10 04:13:45.132260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.817 [2024-12-10 04:13:45.132689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.817 [2024-12-10 04:13:45.132731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.817 [2024-12-10 04:13:45.132749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.817 [2024-12-10 04:13:45.132990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.817 [2024-12-10 04:13:45.133219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.817 [2024-12-10 04:13:45.133238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.817 [2024-12-10 04:13:45.133250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.817 [2024-12-10 04:13:45.133263] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.817 [2024-12-10 04:13:45.145369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.817 [2024-12-10 04:13:45.145745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.817 [2024-12-10 04:13:45.145788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.817 [2024-12-10 04:13:45.145803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.817 [2024-12-10 04:13:45.146057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.817 [2024-12-10 04:13:45.146266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.817 [2024-12-10 04:13:45.146284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.817 [2024-12-10 04:13:45.146296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.817 [2024-12-10 04:13:45.146307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.817 [2024-12-10 04:13:45.158578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.817 [2024-12-10 04:13:45.159003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.817 [2024-12-10 04:13:45.159044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.817 [2024-12-10 04:13:45.159060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.817 [2024-12-10 04:13:45.159282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.817 [2024-12-10 04:13:45.159493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.817 [2024-12-10 04:13:45.159515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.817 [2024-12-10 04:13:45.159528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.817 [2024-12-10 04:13:45.159539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.817 [2024-12-10 04:13:45.171713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.817 [2024-12-10 04:13:45.172203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.817 [2024-12-10 04:13:45.172243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.817 [2024-12-10 04:13:45.172259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.817 [2024-12-10 04:13:45.172506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.817 [2024-12-10 04:13:45.172747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.817 [2024-12-10 04:13:45.172767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.817 [2024-12-10 04:13:45.172780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.817 [2024-12-10 04:13:45.172792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.817 [2024-12-10 04:13:45.184889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.817 [2024-12-10 04:13:45.185214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.817 [2024-12-10 04:13:45.185256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:50.817 [2024-12-10 04:13:45.185271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:50.817 [2024-12-10 04:13:45.185504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:50.817 [2024-12-10 04:13:45.185745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.817 [2024-12-10 04:13:45.185765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.817 [2024-12-10 04:13:45.185779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.817 [2024-12-10 04:13:45.185790] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.079 [2024-12-10 04:13:45.198108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.079 [2024-12-10 04:13:45.198597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.079 [2024-12-10 04:13:45.198626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.079 [2024-12-10 04:13:45.198657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.079 [2024-12-10 04:13:45.198912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.079 [2024-12-10 04:13:45.199106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.079 [2024-12-10 04:13:45.199124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.079 [2024-12-10 04:13:45.199136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.079 [2024-12-10 04:13:45.199153] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.079 [2024-12-10 04:13:45.211217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.079 [2024-12-10 04:13:45.211561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.079 [2024-12-10 04:13:45.211590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.079 [2024-12-10 04:13:45.211621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.079 [2024-12-10 04:13:45.211868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.079 [2024-12-10 04:13:45.212080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.079 [2024-12-10 04:13:45.212098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.079 [2024-12-10 04:13:45.212110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.079 [2024-12-10 04:13:45.212122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.079 [2024-12-10 04:13:45.224276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.079 [2024-12-10 04:13:45.224654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.079 [2024-12-10 04:13:45.224697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.079 [2024-12-10 04:13:45.224712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.079 [2024-12-10 04:13:45.224960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.079 [2024-12-10 04:13:45.225169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.079 [2024-12-10 04:13:45.225187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.079 [2024-12-10 04:13:45.225199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.079 [2024-12-10 04:13:45.225211] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.079 [2024-12-10 04:13:45.237406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.079 [2024-12-10 04:13:45.237799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.079 [2024-12-10 04:13:45.237840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.079 [2024-12-10 04:13:45.237855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.079 [2024-12-10 04:13:45.238079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.079 [2024-12-10 04:13:45.238289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.079 [2024-12-10 04:13:45.238307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.079 [2024-12-10 04:13:45.238319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.079 [2024-12-10 04:13:45.238331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.080 [2024-12-10 04:13:45.250608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.080 [2024-12-10 04:13:45.250978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.080 [2024-12-10 04:13:45.251020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.080 [2024-12-10 04:13:45.251035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.080 [2024-12-10 04:13:45.251284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.080 [2024-12-10 04:13:45.251478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.080 [2024-12-10 04:13:45.251496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.080 [2024-12-10 04:13:45.251507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.080 [2024-12-10 04:13:45.251519] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.080 [2024-12-10 04:13:45.263853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.080 [2024-12-10 04:13:45.264263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.080 [2024-12-10 04:13:45.264303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.080 [2024-12-10 04:13:45.264319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.080 [2024-12-10 04:13:45.264540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.080 [2024-12-10 04:13:45.264779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.080 [2024-12-10 04:13:45.264798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.080 [2024-12-10 04:13:45.264810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.080 [2024-12-10 04:13:45.264821] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.080 [2024-12-10 04:13:45.276881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.080 [2024-12-10 04:13:45.277331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.080 [2024-12-10 04:13:45.277359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.080 [2024-12-10 04:13:45.277375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.080 [2024-12-10 04:13:45.277601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.080 [2024-12-10 04:13:45.277821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.080 [2024-12-10 04:13:45.277841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.080 [2024-12-10 04:13:45.277854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.080 [2024-12-10 04:13:45.277867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.080 [2024-12-10 04:13:45.290264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.080 [2024-12-10 04:13:45.290678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.080 [2024-12-10 04:13:45.290707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.080 [2024-12-10 04:13:45.290723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.080 [2024-12-10 04:13:45.290979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.080 [2024-12-10 04:13:45.291173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.080 [2024-12-10 04:13:45.291191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.080 [2024-12-10 04:13:45.291203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.080 [2024-12-10 04:13:45.291214] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.080 [2024-12-10 04:13:45.303328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.080 [2024-12-10 04:13:45.303702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.080 [2024-12-10 04:13:45.303744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.080 [2024-12-10 04:13:45.303760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.080 [2024-12-10 04:13:45.304028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.080 [2024-12-10 04:13:45.304223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.080 [2024-12-10 04:13:45.304241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.080 [2024-12-10 04:13:45.304253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.080 [2024-12-10 04:13:45.304264] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.080 [2024-12-10 04:13:45.316418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.080 [2024-12-10 04:13:45.316846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.080 [2024-12-10 04:13:45.316874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.080 [2024-12-10 04:13:45.316891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.080 [2024-12-10 04:13:45.317126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.080 [2024-12-10 04:13:45.317357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.080 [2024-12-10 04:13:45.317375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.080 [2024-12-10 04:13:45.317387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.080 [2024-12-10 04:13:45.317398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.080 [2024-12-10 04:13:45.329677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.080 [2024-12-10 04:13:45.330124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.080 [2024-12-10 04:13:45.330166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.080 [2024-12-10 04:13:45.330182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.080 [2024-12-10 04:13:45.330424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.080 [2024-12-10 04:13:45.330679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.080 [2024-12-10 04:13:45.330705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.080 [2024-12-10 04:13:45.330718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.080 [2024-12-10 04:13:45.330730] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.080 [2024-12-10 04:13:45.342815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.080 [2024-12-10 04:13:45.343305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.080 [2024-12-10 04:13:45.343347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.080 [2024-12-10 04:13:45.343363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.080 [2024-12-10 04:13:45.343642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.080 [2024-12-10 04:13:45.343849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.080 [2024-12-10 04:13:45.343868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.080 [2024-12-10 04:13:45.343881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.080 [2024-12-10 04:13:45.343893] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.080 [2024-12-10 04:13:45.356029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.080 [2024-12-10 04:13:45.356522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.080 [2024-12-10 04:13:45.356571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.080 [2024-12-10 04:13:45.356590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.080 [2024-12-10 04:13:45.356821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.080 [2024-12-10 04:13:45.357051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.080 [2024-12-10 04:13:45.357069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.080 [2024-12-10 04:13:45.357081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.080 [2024-12-10 04:13:45.357092] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.080 [2024-12-10 04:13:45.369200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.080 [2024-12-10 04:13:45.369630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.080 [2024-12-10 04:13:45.369657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.080 [2024-12-10 04:13:45.369687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.080 [2024-12-10 04:13:45.369928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.080 [2024-12-10 04:13:45.370140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.080 [2024-12-10 04:13:45.370158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.080 [2024-12-10 04:13:45.370171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.080 [2024-12-10 04:13:45.370186] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.080 [2024-12-10 04:13:45.382225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.080 [2024-12-10 04:13:45.382626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.081 [2024-12-10 04:13:45.382653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.081 [2024-12-10 04:13:45.382669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.081 [2024-12-10 04:13:45.382892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.081 [2024-12-10 04:13:45.383102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.081 [2024-12-10 04:13:45.383120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.081 [2024-12-10 04:13:45.383131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.081 [2024-12-10 04:13:45.383143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.081 [2024-12-10 04:13:45.395231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.081 [2024-12-10 04:13:45.395659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.081 [2024-12-10 04:13:45.395686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.081 [2024-12-10 04:13:45.395701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.081 [2024-12-10 04:13:45.395936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.081 [2024-12-10 04:13:45.396151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.081 [2024-12-10 04:13:45.396169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.081 [2024-12-10 04:13:45.396181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.081 [2024-12-10 04:13:45.396193] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.081 [2024-12-10 04:13:45.408464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.081 [2024-12-10 04:13:45.408912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.081 [2024-12-10 04:13:45.408939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.081 [2024-12-10 04:13:45.408969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.081 [2024-12-10 04:13:45.409208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.081 [2024-12-10 04:13:45.409402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.081 [2024-12-10 04:13:45.409420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.081 [2024-12-10 04:13:45.409432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.081 [2024-12-10 04:13:45.409443] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.081 [2024-12-10 04:13:45.421626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.081 [2024-12-10 04:13:45.422017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.081 [2024-12-10 04:13:45.422062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.081 [2024-12-10 04:13:45.422079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.081 [2024-12-10 04:13:45.422302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.081 [2024-12-10 04:13:45.422512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.081 [2024-12-10 04:13:45.422530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.081 [2024-12-10 04:13:45.422542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.081 [2024-12-10 04:13:45.422580] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.081 [2024-12-10 04:13:45.434823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.081 [2024-12-10 04:13:45.435189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.081 [2024-12-10 04:13:45.435232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.081 [2024-12-10 04:13:45.435247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.081 [2024-12-10 04:13:45.435499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.081 [2024-12-10 04:13:45.435747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.081 [2024-12-10 04:13:45.435768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.081 [2024-12-10 04:13:45.435782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.081 [2024-12-10 04:13:45.435794] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.081 [2024-12-10 04:13:45.447880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.081 [2024-12-10 04:13:45.448369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.081 [2024-12-10 04:13:45.448409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.081 [2024-12-10 04:13:45.448426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.081 [2024-12-10 04:13:45.448692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.081 [2024-12-10 04:13:45.448915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.081 [2024-12-10 04:13:45.448947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.081 [2024-12-10 04:13:45.448958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.081 [2024-12-10 04:13:45.448970] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.341 [2024-12-10 04:13:45.461083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.341 [2024-12-10 04:13:45.461520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.341 [2024-12-10 04:13:45.461574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.341 [2024-12-10 04:13:45.461593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.341 [2024-12-10 04:13:45.461861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.341 [2024-12-10 04:13:45.462057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.341 [2024-12-10 04:13:45.462075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.341 [2024-12-10 04:13:45.462087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.341 [2024-12-10 04:13:45.462098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.341 [2024-12-10 04:13:45.474194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.341 [2024-12-10 04:13:45.474563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.341 [2024-12-10 04:13:45.474608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.341 [2024-12-10 04:13:45.474624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.341 [2024-12-10 04:13:45.474893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.341 [2024-12-10 04:13:45.475087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.341 [2024-12-10 04:13:45.475106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.341 [2024-12-10 04:13:45.475118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.342 [2024-12-10 04:13:45.475129] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.342 [2024-12-10 04:13:45.487266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.342 [2024-12-10 04:13:45.487629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.342 [2024-12-10 04:13:45.487672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.342 [2024-12-10 04:13:45.487687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.342 [2024-12-10 04:13:45.487937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.342 [2024-12-10 04:13:45.488131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.342 [2024-12-10 04:13:45.488149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.342 [2024-12-10 04:13:45.488161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.342 [2024-12-10 04:13:45.488172] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.342 [2024-12-10 04:13:45.500433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.342 [2024-12-10 04:13:45.500832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.342 [2024-12-10 04:13:45.500875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.342 [2024-12-10 04:13:45.500891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.342 [2024-12-10 04:13:45.501140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.342 [2024-12-10 04:13:45.501333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.342 [2024-12-10 04:13:45.501356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.342 [2024-12-10 04:13:45.501369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.342 [2024-12-10 04:13:45.501381] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.342 [2024-12-10 04:13:45.513583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.342 [2024-12-10 04:13:45.514008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.342 [2024-12-10 04:13:45.514035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.342 [2024-12-10 04:13:45.514051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.342 [2024-12-10 04:13:45.514272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.342 [2024-12-10 04:13:45.514480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.342 [2024-12-10 04:13:45.514498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.342 [2024-12-10 04:13:45.514510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.342 [2024-12-10 04:13:45.514521] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.342 [2024-12-10 04:13:45.526690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.342 [2024-12-10 04:13:45.527066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.342 [2024-12-10 04:13:45.527107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.342 [2024-12-10 04:13:45.527122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.342 [2024-12-10 04:13:45.527377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.342 [2024-12-10 04:13:45.527612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.342 [2024-12-10 04:13:45.527648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.342 [2024-12-10 04:13:45.527662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.342 [2024-12-10 04:13:45.527675] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.342 [2024-12-10 04:13:45.539910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.342 [2024-12-10 04:13:45.540278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.342 [2024-12-10 04:13:45.540306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.342 [2024-12-10 04:13:45.540322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.342 [2024-12-10 04:13:45.540567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.342 [2024-12-10 04:13:45.540790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.342 [2024-12-10 04:13:45.540809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.342 [2024-12-10 04:13:45.540822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.342 [2024-12-10 04:13:45.540852] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.342 [2024-12-10 04:13:45.553233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.342 [2024-12-10 04:13:45.553642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.342 [2024-12-10 04:13:45.553668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.342 [2024-12-10 04:13:45.553699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.342 [2024-12-10 04:13:45.553927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.342 [2024-12-10 04:13:45.554153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.342 [2024-12-10 04:13:45.554171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.342 [2024-12-10 04:13:45.554184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.342 [2024-12-10 04:13:45.554195] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.342 [2024-12-10 04:13:45.566285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.342 [2024-12-10 04:13:45.566651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.342 [2024-12-10 04:13:45.566695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.342 [2024-12-10 04:13:45.566711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.342 [2024-12-10 04:13:45.566964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.342 [2024-12-10 04:13:45.567173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.342 [2024-12-10 04:13:45.567191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.342 [2024-12-10 04:13:45.567202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.342 [2024-12-10 04:13:45.567214] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.342 [2024-12-10 04:13:45.579354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.342 [2024-12-10 04:13:45.579741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.342 [2024-12-10 04:13:45.579768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.342 [2024-12-10 04:13:45.579798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.342 [2024-12-10 04:13:45.580008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.342 [2024-12-10 04:13:45.580218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.342 [2024-12-10 04:13:45.580235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.342 [2024-12-10 04:13:45.580247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.342 [2024-12-10 04:13:45.580259] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.342 [2024-12-10 04:13:45.592366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.342 [2024-12-10 04:13:45.592741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.342 [2024-12-10 04:13:45.592788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.342 [2024-12-10 04:13:45.592804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.342 [2024-12-10 04:13:45.593072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.342 [2024-12-10 04:13:45.593267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.342 [2024-12-10 04:13:45.593285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.342 [2024-12-10 04:13:45.593296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.342 [2024-12-10 04:13:45.593308] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.342 [2024-12-10 04:13:45.605553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.342 [2024-12-10 04:13:45.605889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.342 [2024-12-10 04:13:45.605916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.342 [2024-12-10 04:13:45.605932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.342 [2024-12-10 04:13:45.606172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.342 [2024-12-10 04:13:45.606388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.342 [2024-12-10 04:13:45.606421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.342 [2024-12-10 04:13:45.606433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.342 [2024-12-10 04:13:45.606444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.343 [2024-12-10 04:13:45.618660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.343 [2024-12-10 04:13:45.619089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.343 [2024-12-10 04:13:45.619130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.343 [2024-12-10 04:13:45.619147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.343 [2024-12-10 04:13:45.619388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.343 [2024-12-10 04:13:45.619629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.343 [2024-12-10 04:13:45.619650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.343 [2024-12-10 04:13:45.619663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.343 [2024-12-10 04:13:45.619675] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.343 [2024-12-10 04:13:45.631894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.343 [2024-12-10 04:13:45.632232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.343 [2024-12-10 04:13:45.632260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.343 [2024-12-10 04:13:45.632276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.343 [2024-12-10 04:13:45.632506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.343 [2024-12-10 04:13:45.632751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.343 [2024-12-10 04:13:45.632772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.343 [2024-12-10 04:13:45.632785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.343 [2024-12-10 04:13:45.632798] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.343 [2024-12-10 04:13:45.645092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.343 [2024-12-10 04:13:45.645427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.343 [2024-12-10 04:13:45.645453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.343 [2024-12-10 04:13:45.645468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.343 [2024-12-10 04:13:45.645701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.343 [2024-12-10 04:13:45.645940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.343 [2024-12-10 04:13:45.645958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.343 [2024-12-10 04:13:45.645970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.343 [2024-12-10 04:13:45.645981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.343 [2024-12-10 04:13:45.658364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.343 [2024-12-10 04:13:45.658755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.343 [2024-12-10 04:13:45.658799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.343 [2024-12-10 04:13:45.658814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.343 [2024-12-10 04:13:45.659080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.343 [2024-12-10 04:13:45.659275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.343 [2024-12-10 04:13:45.659293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.343 [2024-12-10 04:13:45.659305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.343 [2024-12-10 04:13:45.659316] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.343 [2024-12-10 04:13:45.671641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.343 [2024-12-10 04:13:45.672130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.343 [2024-12-10 04:13:45.672181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.343 [2024-12-10 04:13:45.672195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.343 [2024-12-10 04:13:45.672439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.343 [2024-12-10 04:13:45.672666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.343 [2024-12-10 04:13:45.672691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.343 [2024-12-10 04:13:45.672704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.343 [2024-12-10 04:13:45.672716] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.343 [2024-12-10 04:13:45.684876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.343 [2024-12-10 04:13:45.685353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.343 [2024-12-10 04:13:45.685380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.343 [2024-12-10 04:13:45.685410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.343 [2024-12-10 04:13:45.685679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.343 [2024-12-10 04:13:45.685915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.343 [2024-12-10 04:13:45.685948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.343 [2024-12-10 04:13:45.685960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.343 [2024-12-10 04:13:45.685971] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.343 [2024-12-10 04:13:45.698121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.343 [2024-12-10 04:13:45.698616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.343 [2024-12-10 04:13:45.698658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.343 [2024-12-10 04:13:45.698674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.343 [2024-12-10 04:13:45.698926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.343 [2024-12-10 04:13:45.699135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.343 [2024-12-10 04:13:45.699153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.343 [2024-12-10 04:13:45.699165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.343 [2024-12-10 04:13:45.699176] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.343 [2024-12-10 04:13:45.711560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.343 [2024-12-10 04:13:45.711939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.343 [2024-12-10 04:13:45.711965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.343 [2024-12-10 04:13:45.711980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.343 [2024-12-10 04:13:45.712183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.343 [2024-12-10 04:13:45.712398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.343 [2024-12-10 04:13:45.712417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.343 [2024-12-10 04:13:45.712429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.343 [2024-12-10 04:13:45.712446] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.603 [2024-12-10 04:13:45.725115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.603 [2024-12-10 04:13:45.725499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.603 [2024-12-10 04:13:45.725530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.603 [2024-12-10 04:13:45.725555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.603 [2024-12-10 04:13:45.725774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.603 [2024-12-10 04:13:45.726012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.603 [2024-12-10 04:13:45.726030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.603 [2024-12-10 04:13:45.726043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.603 [2024-12-10 04:13:45.726054] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.603 [2024-12-10 04:13:45.738514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.603 [2024-12-10 04:13:45.738953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.603 [2024-12-10 04:13:45.738996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.603 [2024-12-10 04:13:45.739012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.603 [2024-12-10 04:13:45.739258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.603 [2024-12-10 04:13:45.739452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.603 [2024-12-10 04:13:45.739470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.603 [2024-12-10 04:13:45.739482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.603 [2024-12-10 04:13:45.739494] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.603 [2024-12-10 04:13:45.751994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.603 [2024-12-10 04:13:45.752374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.603 [2024-12-10 04:13:45.752402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.603 [2024-12-10 04:13:45.752417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.603 [2024-12-10 04:13:45.752655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.603 [2024-12-10 04:13:45.752886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.603 [2024-12-10 04:13:45.752920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.603 [2024-12-10 04:13:45.752932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.603 [2024-12-10 04:13:45.752944] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.603 [2024-12-10 04:13:45.765188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.603 [2024-12-10 04:13:45.765557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.603 [2024-12-10 04:13:45.765616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.603 [2024-12-10 04:13:45.765633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.603 [2024-12-10 04:13:45.765856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.603 [2024-12-10 04:13:45.766065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.603 [2024-12-10 04:13:45.766083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.603 [2024-12-10 04:13:45.766095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.603 [2024-12-10 04:13:45.766107] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.603 [2024-12-10 04:13:45.778470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.603 [2024-12-10 04:13:45.778932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.603 [2024-12-10 04:13:45.778961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.603 [2024-12-10 04:13:45.778977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.603 [2024-12-10 04:13:45.779224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.603 [2024-12-10 04:13:45.779452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.603 [2024-12-10 04:13:45.779472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.603 [2024-12-10 04:13:45.779485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.603 [2024-12-10 04:13:45.779497] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.603 [2024-12-10 04:13:45.791706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.603 [2024-12-10 04:13:45.792166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.603 [2024-12-10 04:13:45.792212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.603 [2024-12-10 04:13:45.792228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.603 [2024-12-10 04:13:45.792496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.603 [2024-12-10 04:13:45.792730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.603 [2024-12-10 04:13:45.792751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.603 [2024-12-10 04:13:45.792764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.603 [2024-12-10 04:13:45.792777] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.603 [2024-12-10 04:13:45.804951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.604 [2024-12-10 04:13:45.805323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.604 [2024-12-10 04:13:45.805369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.604 [2024-12-10 04:13:45.805384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.604 [2024-12-10 04:13:45.805636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.604 [2024-12-10 04:13:45.805843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.604 [2024-12-10 04:13:45.805862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.604 [2024-12-10 04:13:45.805889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.604 [2024-12-10 04:13:45.805901] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.604 [2024-12-10 04:13:45.818220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.604 [2024-12-10 04:13:45.818525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.604 [2024-12-10 04:13:45.818575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.604 [2024-12-10 04:13:45.818592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.604 [2024-12-10 04:13:45.818810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.604 [2024-12-10 04:13:45.819021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.604 [2024-12-10 04:13:45.819039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.604 [2024-12-10 04:13:45.819051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.604 [2024-12-10 04:13:45.819062] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.604 [2024-12-10 04:13:45.831459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.604 [2024-12-10 04:13:45.831857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.604 [2024-12-10 04:13:45.831901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.604 [2024-12-10 04:13:45.831916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.604 [2024-12-10 04:13:45.832185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.604 [2024-12-10 04:13:45.832380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.604 [2024-12-10 04:13:45.832398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.604 [2024-12-10 04:13:45.832410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.604 [2024-12-10 04:13:45.832421] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.604 [2024-12-10 04:13:45.844632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.604 [2024-12-10 04:13:45.845131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.604 [2024-12-10 04:13:45.845158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.604 [2024-12-10 04:13:45.845188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.604 [2024-12-10 04:13:45.845420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.604 [2024-12-10 04:13:45.845660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.604 [2024-12-10 04:13:45.845685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.604 [2024-12-10 04:13:45.845698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.604 [2024-12-10 04:13:45.845710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.604 [2024-12-10 04:13:45.857806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.604 [2024-12-10 04:13:45.858202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.604 [2024-12-10 04:13:45.858229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.604 [2024-12-10 04:13:45.858245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.604 [2024-12-10 04:13:45.858467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.604 [2024-12-10 04:13:45.858727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.604 [2024-12-10 04:13:45.858748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.604 [2024-12-10 04:13:45.858761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.604 [2024-12-10 04:13:45.858773] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.604 [2024-12-10 04:13:45.870879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.604 [2024-12-10 04:13:45.871241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.604 [2024-12-10 04:13:45.871268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.604 [2024-12-10 04:13:45.871283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.604 [2024-12-10 04:13:45.871520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.604 [2024-12-10 04:13:45.871751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.604 [2024-12-10 04:13:45.871771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.604 [2024-12-10 04:13:45.871784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.604 [2024-12-10 04:13:45.871796] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.604 [2024-12-10 04:13:45.884106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.604 [2024-12-10 04:13:45.884474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.604 [2024-12-10 04:13:45.884517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.604 [2024-12-10 04:13:45.884533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.604 [2024-12-10 04:13:45.884797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.604 [2024-12-10 04:13:45.885027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.604 [2024-12-10 04:13:45.885045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.604 [2024-12-10 04:13:45.885057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.604 [2024-12-10 04:13:45.885073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.604 [2024-12-10 04:13:45.897242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.604 [2024-12-10 04:13:45.897608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.604 [2024-12-10 04:13:45.897635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.604 [2024-12-10 04:13:45.897651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.604 [2024-12-10 04:13:45.897889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.604 [2024-12-10 04:13:45.898099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.604 [2024-12-10 04:13:45.898117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.604 [2024-12-10 04:13:45.898129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.604 [2024-12-10 04:13:45.898140] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.604 [2024-12-10 04:13:45.910430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.604 [2024-12-10 04:13:45.910768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.604 [2024-12-10 04:13:45.910795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.604 [2024-12-10 04:13:45.910810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.604 [2024-12-10 04:13:45.911034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.604 [2024-12-10 04:13:45.911245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.604 [2024-12-10 04:13:45.911263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.604 [2024-12-10 04:13:45.911275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.604 [2024-12-10 04:13:45.911286] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.604 [2024-12-10 04:13:45.923730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.604 [2024-12-10 04:13:45.924129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.604 [2024-12-10 04:13:45.924154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.604 [2024-12-10 04:13:45.924169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.604 [2024-12-10 04:13:45.924371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.604 [2024-12-10 04:13:45.924624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.604 [2024-12-10 04:13:45.924643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.604 [2024-12-10 04:13:45.924656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.604 [2024-12-10 04:13:45.924667] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.604 [2024-12-10 04:13:45.936763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.604 [2024-12-10 04:13:45.937123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.604 [2024-12-10 04:13:45.937157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.605 [2024-12-10 04:13:45.937173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.605 [2024-12-10 04:13:45.937409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.605 [2024-12-10 04:13:45.937646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.605 [2024-12-10 04:13:45.937665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.605 [2024-12-10 04:13:45.937678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.605 [2024-12-10 04:13:45.937690] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.605 [2024-12-10 04:13:45.949836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.605 [2024-12-10 04:13:45.950337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.605 [2024-12-10 04:13:45.950379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.605 [2024-12-10 04:13:45.950396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.605 [2024-12-10 04:13:45.950678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.605 [2024-12-10 04:13:45.950885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.605 [2024-12-10 04:13:45.950904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.605 [2024-12-10 04:13:45.950916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.605 [2024-12-10 04:13:45.950929] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.605 [2024-12-10 04:13:45.962883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.605 [2024-12-10 04:13:45.963249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.605 [2024-12-10 04:13:45.963277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.605 [2024-12-10 04:13:45.963292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.605 [2024-12-10 04:13:45.963535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.605 [2024-12-10 04:13:45.963767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.605 [2024-12-10 04:13:45.963787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.605 [2024-12-10 04:13:45.963799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.605 [2024-12-10 04:13:45.963811] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.605 [2024-12-10 04:13:45.976060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.605 [2024-12-10 04:13:45.976431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.605 [2024-12-10 04:13:45.976476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.605 [2024-12-10 04:13:45.976492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.605 [2024-12-10 04:13:45.976748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.605 [2024-12-10 04:13:45.976960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.605 [2024-12-10 04:13:45.976979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.605 [2024-12-10 04:13:45.976991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.605 [2024-12-10 04:13:45.977002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.864 [2024-12-10 04:13:45.989393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.864 [2024-12-10 04:13:45.989828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.864 [2024-12-10 04:13:45.989861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.864 [2024-12-10 04:13:45.989894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.864 [2024-12-10 04:13:45.990130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.864 [2024-12-10 04:13:45.990349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.864 [2024-12-10 04:13:45.990368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.864 [2024-12-10 04:13:45.990380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.864 [2024-12-10 04:13:45.990392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.864 [2024-12-10 04:13:46.002631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.864 [2024-12-10 04:13:46.002992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.864 [2024-12-10 04:13:46.003025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.864 [2024-12-10 04:13:46.003058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.864 [2024-12-10 04:13:46.003282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.864 [2024-12-10 04:13:46.003497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.864 [2024-12-10 04:13:46.003516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.864 [2024-12-10 04:13:46.003562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.864 [2024-12-10 04:13:46.003578] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.864 [2024-12-10 04:13:46.016044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.864 [2024-12-10 04:13:46.016417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.864 [2024-12-10 04:13:46.016446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.864 [2024-12-10 04:13:46.016462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.864 [2024-12-10 04:13:46.016704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.864 [2024-12-10 04:13:46.016944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.864 [2024-12-10 04:13:46.016968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.864 [2024-12-10 04:13:46.016981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.864 [2024-12-10 04:13:46.016993] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.865 4247.20 IOPS, 16.59 MiB/s [2024-12-10T03:13:46.254Z] [2024-12-10 04:13:46.030706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.865 [2024-12-10 04:13:46.031167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.865 [2024-12-10 04:13:46.031201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.865 [2024-12-10 04:13:46.031234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.865 [2024-12-10 04:13:46.031479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.865 [2024-12-10 04:13:46.031718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.865 [2024-12-10 04:13:46.031739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.865 [2024-12-10 04:13:46.031752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.865 [2024-12-10 04:13:46.031765] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
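(Aside, not part of the captured console output: the "4247.20 IOPS, 16.59 MiB/s" entry above is bdevperf's periodic throughput sample, interleaved with the reconnect errors. The two figures are consistent with a 4 KiB I/O size, which is an inference rather than something the log states: 4247.20 IOPS x 4096 B ≈ 17.4 MB/s ≈ 16.59 MiB/s. In other words, the performance counter is still being reported while every reconnect attempt to 10.0.0.2 port 4420 is being refused.)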
00:25:51.865 [2024-12-10 04:13:46.044081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.865 [2024-12-10 04:13:46.044447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.865 [2024-12-10 04:13:46.044475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.865 [2024-12-10 04:13:46.044491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.865 [2024-12-10 04:13:46.044746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.865 [2024-12-10 04:13:46.044980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.865 [2024-12-10 04:13:46.044999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.865 [2024-12-10 04:13:46.045011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.865 [2024-12-10 04:13:46.045023] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.865 [2024-12-10 04:13:46.057413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.865 [2024-12-10 04:13:46.057822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.865 [2024-12-10 04:13:46.057864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.865 [2024-12-10 04:13:46.057880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.865 [2024-12-10 04:13:46.058121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.865 [2024-12-10 04:13:46.058330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.865 [2024-12-10 04:13:46.058348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.865 [2024-12-10 04:13:46.058361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.865 [2024-12-10 04:13:46.058377] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.865 [2024-12-10 04:13:46.070942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.865 [2024-12-10 04:13:46.071283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.865 [2024-12-10 04:13:46.071334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.865 [2024-12-10 04:13:46.071351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.865 [2024-12-10 04:13:46.071591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.865 [2024-12-10 04:13:46.071804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.865 [2024-12-10 04:13:46.071838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.865 [2024-12-10 04:13:46.071851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.865 [2024-12-10 04:13:46.071863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.865 [2024-12-10 04:13:46.084362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.865 [2024-12-10 04:13:46.084696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.865 [2024-12-10 04:13:46.084725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.865 [2024-12-10 04:13:46.084741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.865 [2024-12-10 04:13:46.084986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.865 [2024-12-10 04:13:46.085201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.865 [2024-12-10 04:13:46.085220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.865 [2024-12-10 04:13:46.085232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.865 [2024-12-10 04:13:46.085243] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.865 [2024-12-10 04:13:46.097791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.865 [2024-12-10 04:13:46.098209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.865 [2024-12-10 04:13:46.098236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.865 [2024-12-10 04:13:46.098252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.865 [2024-12-10 04:13:46.098474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.865 [2024-12-10 04:13:46.098730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.865 [2024-12-10 04:13:46.098751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.865 [2024-12-10 04:13:46.098764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.865 [2024-12-10 04:13:46.098776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.865 [2024-12-10 04:13:46.111022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.865 [2024-12-10 04:13:46.111424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.865 [2024-12-10 04:13:46.111457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.865 [2024-12-10 04:13:46.111489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.865 [2024-12-10 04:13:46.111759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.865 [2024-12-10 04:13:46.111991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.865 [2024-12-10 04:13:46.112009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.865 [2024-12-10 04:13:46.112021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.865 [2024-12-10 04:13:46.112033] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.865 [2024-12-10 04:13:46.124193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.865 [2024-12-10 04:13:46.124620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.865 [2024-12-10 04:13:46.124649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.865 [2024-12-10 04:13:46.124664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.865 [2024-12-10 04:13:46.124906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.865 [2024-12-10 04:13:46.125115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.865 [2024-12-10 04:13:46.125133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.865 [2024-12-10 04:13:46.125146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.865 [2024-12-10 04:13:46.125157] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.865 [2024-12-10 04:13:46.137274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.865 [2024-12-10 04:13:46.137643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.865 [2024-12-10 04:13:46.137687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.866 [2024-12-10 04:13:46.137703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.866 [2024-12-10 04:13:46.137972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.866 [2024-12-10 04:13:46.138166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.866 [2024-12-10 04:13:46.138184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.866 [2024-12-10 04:13:46.138196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.866 [2024-12-10 04:13:46.138208] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.866 [2024-12-10 04:13:46.150450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.866 [2024-12-10 04:13:46.150851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.866 [2024-12-10 04:13:46.150879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.866 [2024-12-10 04:13:46.150895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.866 [2024-12-10 04:13:46.151142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.866 [2024-12-10 04:13:46.151351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.866 [2024-12-10 04:13:46.151369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.866 [2024-12-10 04:13:46.151381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.866 [2024-12-10 04:13:46.151392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.866 [2024-12-10 04:13:46.163505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.866 [2024-12-10 04:13:46.163878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.866 [2024-12-10 04:13:46.163906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.866 [2024-12-10 04:13:46.163921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.866 [2024-12-10 04:13:46.164157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.866 [2024-12-10 04:13:46.164368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.866 [2024-12-10 04:13:46.164386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.866 [2024-12-10 04:13:46.164398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.866 [2024-12-10 04:13:46.164409] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.866 [2024-12-10 04:13:46.176660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.866 [2024-12-10 04:13:46.177088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.866 [2024-12-10 04:13:46.177114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.866 [2024-12-10 04:13:46.177129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.866 [2024-12-10 04:13:46.177364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.866 [2024-12-10 04:13:46.177601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.866 [2024-12-10 04:13:46.177620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.866 [2024-12-10 04:13:46.177633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.866 [2024-12-10 04:13:46.177644] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.866 [2024-12-10 04:13:46.189786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.866 [2024-12-10 04:13:46.190151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.866 [2024-12-10 04:13:46.190192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.866 [2024-12-10 04:13:46.190207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.866 [2024-12-10 04:13:46.190454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.866 [2024-12-10 04:13:46.190676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.866 [2024-12-10 04:13:46.190700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.866 [2024-12-10 04:13:46.190713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.866 [2024-12-10 04:13:46.190724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.866 [2024-12-10 04:13:46.202940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.866 [2024-12-10 04:13:46.203366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.866 [2024-12-10 04:13:46.203393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.866 [2024-12-10 04:13:46.203408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.866 [2024-12-10 04:13:46.203655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.866 [2024-12-10 04:13:46.203855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.866 [2024-12-10 04:13:46.203888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.866 [2024-12-10 04:13:46.203900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.866 [2024-12-10 04:13:46.203911] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.866 [2024-12-10 04:13:46.215985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.866 [2024-12-10 04:13:46.216382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.866 [2024-12-10 04:13:46.216409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.866 [2024-12-10 04:13:46.216424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.866 [2024-12-10 04:13:46.216657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.866 [2024-12-10 04:13:46.216867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.866 [2024-12-10 04:13:46.216886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.866 [2024-12-10 04:13:46.216897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.866 [2024-12-10 04:13:46.216908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:51.866 [2024-12-10 04:13:46.229018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.866 [2024-12-10 04:13:46.229351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.866 [2024-12-10 04:13:46.229379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.866 [2024-12-10 04:13:46.229395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.866 [2024-12-10 04:13:46.229616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.866 [2024-12-10 04:13:46.229839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.866 [2024-12-10 04:13:46.229872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.866 [2024-12-10 04:13:46.229884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.866 [2024-12-10 04:13:46.229901] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:51.866 [2024-12-10 04:13:46.242231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:51.866 [2024-12-10 04:13:46.242534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.866 [2024-12-10 04:13:46.242585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:51.866 [2024-12-10 04:13:46.242601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:51.866 [2024-12-10 04:13:46.242825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:51.866 [2024-12-10 04:13:46.243036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:51.866 [2024-12-10 04:13:46.243054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:51.866 [2024-12-10 04:13:46.243066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:51.866 [2024-12-10 04:13:46.243078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:52.126 [2024-12-10 04:13:46.255454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.126 [2024-12-10 04:13:46.255813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.126 [2024-12-10 04:13:46.255841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.126 [2024-12-10 04:13:46.255878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.126 [2024-12-10 04:13:46.256094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.126 [2024-12-10 04:13:46.256304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.126 [2024-12-10 04:13:46.256321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.126 [2024-12-10 04:13:46.256333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.126 [2024-12-10 04:13:46.256345] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.126 [2024-12-10 04:13:46.268571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.126 [2024-12-10 04:13:46.268935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.126 [2024-12-10 04:13:46.268962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.126 [2024-12-10 04:13:46.268977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.126 [2024-12-10 04:13:46.269214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.126 [2024-12-10 04:13:46.269424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.126 [2024-12-10 04:13:46.269442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.126 [2024-12-10 04:13:46.269454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.126 [2024-12-10 04:13:46.269466] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:52.126 [2024-12-10 04:13:46.281755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.126 [2024-12-10 04:13:46.282164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.126 [2024-12-10 04:13:46.282207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.126 [2024-12-10 04:13:46.282223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.126 [2024-12-10 04:13:46.282462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.126 [2024-12-10 04:13:46.282718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.126 [2024-12-10 04:13:46.282740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.126 [2024-12-10 04:13:46.282753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.126 [2024-12-10 04:13:46.282766] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.126 [2024-12-10 04:13:46.295058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.126 [2024-12-10 04:13:46.295372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.126 [2024-12-10 04:13:46.295413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.126 [2024-12-10 04:13:46.295428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.126 [2024-12-10 04:13:46.295675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.126 [2024-12-10 04:13:46.295905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.126 [2024-12-10 04:13:46.295924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.126 [2024-12-10 04:13:46.295936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.126 [2024-12-10 04:13:46.295946] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:52.126 [2024-12-10 04:13:46.308283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.126 [2024-12-10 04:13:46.308582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.126 [2024-12-10 04:13:46.308625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.126 [2024-12-10 04:13:46.308641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.126 [2024-12-10 04:13:46.308865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.126 [2024-12-10 04:13:46.309075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.126 [2024-12-10 04:13:46.309093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.126 [2024-12-10 04:13:46.309105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.126 [2024-12-10 04:13:46.309117] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.126 [2024-12-10 04:13:46.321391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.126 [2024-12-10 04:13:46.321777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.126 [2024-12-10 04:13:46.321804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.126 [2024-12-10 04:13:46.321820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.126 [2024-12-10 04:13:46.322059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.126 [2024-12-10 04:13:46.322254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.126 [2024-12-10 04:13:46.322271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.126 [2024-12-10 04:13:46.322283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.126 [2024-12-10 04:13:46.322295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:52.126 [2024-12-10 04:13:46.334581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.126 [2024-12-10 04:13:46.334965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.126 [2024-12-10 04:13:46.334993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.126 [2024-12-10 04:13:46.335008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.126 [2024-12-10 04:13:46.335231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.126 [2024-12-10 04:13:46.335440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.126 [2024-12-10 04:13:46.335458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.126 [2024-12-10 04:13:46.335470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.126 [2024-12-10 04:13:46.335481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.126 [2024-12-10 04:13:46.347691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.127 [2024-12-10 04:13:46.348181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.127 [2024-12-10 04:13:46.348223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.127 [2024-12-10 04:13:46.348239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.127 [2024-12-10 04:13:46.348491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.127 [2024-12-10 04:13:46.348734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.127 [2024-12-10 04:13:46.348754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.127 [2024-12-10 04:13:46.348767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.127 [2024-12-10 04:13:46.348779] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:52.127 [2024-12-10 04:13:46.360892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.127 [2024-12-10 04:13:46.361277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.127 [2024-12-10 04:13:46.361303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.127 [2024-12-10 04:13:46.361318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.127 [2024-12-10 04:13:46.361535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.127 [2024-12-10 04:13:46.361744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.127 [2024-12-10 04:13:46.361768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.127 [2024-12-10 04:13:46.361780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.127 [2024-12-10 04:13:46.361792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.127 [2024-12-10 04:13:46.374007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.127 [2024-12-10 04:13:46.374422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.127 [2024-12-10 04:13:46.374473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.127 [2024-12-10 04:13:46.374488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.127 [2024-12-10 04:13:46.374735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.127 [2024-12-10 04:13:46.374950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.127 [2024-12-10 04:13:46.374968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.127 [2024-12-10 04:13:46.374980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.127 [2024-12-10 04:13:46.374992] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:52.127 [2024-12-10 04:13:46.387169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.127 [2024-12-10 04:13:46.387585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.127 [2024-12-10 04:13:46.387627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.127 [2024-12-10 04:13:46.387642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.127 [2024-12-10 04:13:46.387891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.127 [2024-12-10 04:13:46.388101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.127 [2024-12-10 04:13:46.388119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.127 [2024-12-10 04:13:46.388131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.127 [2024-12-10 04:13:46.388142] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.127 [2024-12-10 04:13:46.400235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.127 [2024-12-10 04:13:46.400571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.127 [2024-12-10 04:13:46.400599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.127 [2024-12-10 04:13:46.400615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.127 [2024-12-10 04:13:46.400874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.127 [2024-12-10 04:13:46.401085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.127 [2024-12-10 04:13:46.401103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.127 [2024-12-10 04:13:46.401115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.127 [2024-12-10 04:13:46.401131] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:52.127 [2024-12-10 04:13:46.413384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.127 [2024-12-10 04:13:46.413758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.127 [2024-12-10 04:13:46.413801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.127 [2024-12-10 04:13:46.413817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.127 [2024-12-10 04:13:46.414069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.127 [2024-12-10 04:13:46.414263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.127 [2024-12-10 04:13:46.414281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.127 [2024-12-10 04:13:46.414292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.127 [2024-12-10 04:13:46.414304] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.127 [2024-12-10 04:13:46.426403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.127 [2024-12-10 04:13:46.426775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.127 [2024-12-10 04:13:46.426818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.127 [2024-12-10 04:13:46.426833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.127 [2024-12-10 04:13:46.427087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.127 [2024-12-10 04:13:46.427296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.127 [2024-12-10 04:13:46.427313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.127 [2024-12-10 04:13:46.427325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.127 [2024-12-10 04:13:46.427337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:52.127 [2024-12-10 04:13:46.439469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.127 [2024-12-10 04:13:46.439859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.127 [2024-12-10 04:13:46.439902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.127 [2024-12-10 04:13:46.439917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.127 [2024-12-10 04:13:46.440170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.127 [2024-12-10 04:13:46.440378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.127 [2024-12-10 04:13:46.440396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.127 [2024-12-10 04:13:46.440408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.127 [2024-12-10 04:13:46.440419] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.127 [2024-12-10 04:13:46.452574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.127 [2024-12-10 04:13:46.452944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.127 [2024-12-10 04:13:46.452986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.127 [2024-12-10 04:13:46.453002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.127 [2024-12-10 04:13:46.453256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.127 [2024-12-10 04:13:46.453465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.127 [2024-12-10 04:13:46.453483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.127 [2024-12-10 04:13:46.453494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.127 [2024-12-10 04:13:46.453505] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:52.127 [2024-12-10 04:13:46.465778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.127 [2024-12-10 04:13:46.466150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.127 [2024-12-10 04:13:46.466191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.127 [2024-12-10 04:13:46.466206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.127 [2024-12-10 04:13:46.466454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.127 [2024-12-10 04:13:46.466676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.128 [2024-12-10 04:13:46.466696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.128 [2024-12-10 04:13:46.466708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.128 [2024-12-10 04:13:46.466720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.128 [2024-12-10 04:13:46.478782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.128 [2024-12-10 04:13:46.479116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.128 [2024-12-10 04:13:46.479192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.128 [2024-12-10 04:13:46.479208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.128 [2024-12-10 04:13:46.479437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.128 [2024-12-10 04:13:46.479659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.128 [2024-12-10 04:13:46.479679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.128 [2024-12-10 04:13:46.479692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.128 [2024-12-10 04:13:46.479703] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:52.128 [2024-12-10 04:13:46.491942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.128 [2024-12-10 04:13:46.492306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.128 [2024-12-10 04:13:46.492349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.128 [2024-12-10 04:13:46.492364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.128 [2024-12-10 04:13:46.492649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.128 [2024-12-10 04:13:46.492878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.128 [2024-12-10 04:13:46.492897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.128 [2024-12-10 04:13:46.492910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.128 [2024-12-10 04:13:46.492922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2501792 Killed "${NVMF_APP[@]}" "$@" 00:25:52.128 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:52.128 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:52.128 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:52.128 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:52.128 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:52.128 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2502746 00:25:52.128 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:52.128 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2502746 00:25:52.128 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2502746 ']' 00:25:52.128 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.128 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:52.128 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:52.128 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:52.128 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:52.128 [2024-12-10 04:13:46.505426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.128 [2024-12-10 04:13:46.505803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.128 [2024-12-10 04:13:46.505833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.128 [2024-12-10 04:13:46.505850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.128 [2024-12-10 04:13:46.506093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.128 [2024-12-10 04:13:46.506316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.128 [2024-12-10 04:13:46.506336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.128 [2024-12-10 04:13:46.506348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.128 [2024-12-10 04:13:46.506361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.389 [2024-12-10 04:13:46.518988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.389 [2024-12-10 04:13:46.519307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.389 [2024-12-10 04:13:46.519350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.389 [2024-12-10 04:13:46.519373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.389 [2024-12-10 04:13:46.519610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.389 [2024-12-10 04:13:46.519831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.389 [2024-12-10 04:13:46.519864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.389 [2024-12-10 04:13:46.519876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.390 [2024-12-10 04:13:46.519888] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:52.390 [2024-12-10 04:13:46.532283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.390 [2024-12-10 04:13:46.532671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.390 [2024-12-10 04:13:46.532699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.390 [2024-12-10 04:13:46.532715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.390 [2024-12-10 04:13:46.532964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.390 [2024-12-10 04:13:46.533214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.390 [2024-12-10 04:13:46.533250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.390 [2024-12-10 04:13:46.533263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.390 [2024-12-10 04:13:46.533277] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.390 [2024-12-10 04:13:46.545647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.390 [2024-12-10 04:13:46.545992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.390 [2024-12-10 04:13:46.546019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.390 [2024-12-10 04:13:46.546034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.390 [2024-12-10 04:13:46.546251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.390 [2024-12-10 04:13:46.546451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.390 [2024-12-10 04:13:46.546470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.390 [2024-12-10 04:13:46.546482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.390 [2024-12-10 04:13:46.546494] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.390 [2024-12-10 04:13:46.546591] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:25:52.390 [2024-12-10 04:13:46.546653] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.390 [2024-12-10 04:13:46.559070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.390 [2024-12-10 04:13:46.559480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.390 [2024-12-10 04:13:46.559507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.390 [2024-12-10 04:13:46.559532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.390 [2024-12-10 04:13:46.559786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.390 [2024-12-10 04:13:46.560002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.390 [2024-12-10 04:13:46.560022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.390 [2024-12-10 04:13:46.560034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.390 [2024-12-10 04:13:46.560045] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.390 [2024-12-10 04:13:46.572440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.390 [2024-12-10 04:13:46.572845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.390 [2024-12-10 04:13:46.572888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.390 [2024-12-10 04:13:46.572903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.390 [2024-12-10 04:13:46.573153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.390 [2024-12-10 04:13:46.573353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.390 [2024-12-10 04:13:46.573372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.390 [2024-12-10 04:13:46.573384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.390 [2024-12-10 04:13:46.573396] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:52.390 [2024-12-10 04:13:46.585931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.390 [2024-12-10 04:13:46.586319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.390 [2024-12-10 04:13:46.586347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.390 [2024-12-10 04:13:46.586363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.390 [2024-12-10 04:13:46.586613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.390 [2024-12-10 04:13:46.586841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.390 [2024-12-10 04:13:46.586860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.390 [2024-12-10 04:13:46.586874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.390 [2024-12-10 04:13:46.586886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.390 [2024-12-10 04:13:46.599193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.390 [2024-12-10 04:13:46.599629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.390 [2024-12-10 04:13:46.599658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.390 [2024-12-10 04:13:46.599674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.390 [2024-12-10 04:13:46.599907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.390 [2024-12-10 04:13:46.600129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.390 [2024-12-10 04:13:46.600148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.390 [2024-12-10 04:13:46.600160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.390 [2024-12-10 04:13:46.600172] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:52.390 [2024-12-10 04:13:46.612623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.390 [2024-12-10 04:13:46.613079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.390 [2024-12-10 04:13:46.613122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.390 [2024-12-10 04:13:46.613138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.390 [2024-12-10 04:13:46.613379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.390 [2024-12-10 04:13:46.613607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.390 [2024-12-10 04:13:46.613627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.390 [2024-12-10 04:13:46.613640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.390 [2024-12-10 04:13:46.613652] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.390 [2024-12-10 04:13:46.620698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:52.390 [2024-12-10 04:13:46.625982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.390 [2024-12-10 04:13:46.626437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.390 [2024-12-10 04:13:46.626466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.390 [2024-12-10 04:13:46.626483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.390 [2024-12-10 04:13:46.626729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.390 [2024-12-10 04:13:46.626966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.390 [2024-12-10 04:13:46.626984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.390 [2024-12-10 04:13:46.626998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.390 [2024-12-10 04:13:46.627011] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
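For reference, the nvmf_tgt in this run is launched with core mask 0xE (seen in the -m 0xE option and the DPDK EAL "-c 0xE" parameter above). Mask 0xE is binary 1110, i.e. cores 1, 2 and 3, which matches the "Total cores available: 3" notice just above and the reactor start notices further down. A minimal shell sketch for decoding such a mask (illustrative only, not part of the test scripts):

mask=0xE                                  # core mask as passed via -m / -c in this log
for i in $(seq 0 63); do
  (( (mask >> i) & 1 )) && echo "core $i enabled"
done
# prints: core 1 enabled, core 2 enabled, core 3 enabled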
00:25:52.390 [2024-12-10 04:13:46.639338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.390 [2024-12-10 04:13:46.639896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.390 [2024-12-10 04:13:46.639934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.390 [2024-12-10 04:13:46.639970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.390 [2024-12-10 04:13:46.640235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.390 [2024-12-10 04:13:46.640439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.390 [2024-12-10 04:13:46.640469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.390 [2024-12-10 04:13:46.640485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.390 [2024-12-10 04:13:46.640500] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.390 [2024-12-10 04:13:46.652748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.390 [2024-12-10 04:13:46.653142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.390 [2024-12-10 04:13:46.653169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.390 [2024-12-10 04:13:46.653186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.390 [2024-12-10 04:13:46.653422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.390 [2024-12-10 04:13:46.653672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.391 [2024-12-10 04:13:46.653694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.391 [2024-12-10 04:13:46.653707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.391 [2024-12-10 04:13:46.653719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:52.391 [2024-12-10 04:13:46.666106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.391 [2024-12-10 04:13:46.666479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.391 [2024-12-10 04:13:46.666522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.391 [2024-12-10 04:13:46.666538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.391 [2024-12-10 04:13:46.666792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.391 [2024-12-10 04:13:46.667031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.391 [2024-12-10 04:13:46.667049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.391 [2024-12-10 04:13:46.667062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.391 [2024-12-10 04:13:46.667074] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.391 [2024-12-10 04:13:46.677652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:52.391 [2024-12-10 04:13:46.677683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:52.391 [2024-12-10 04:13:46.677712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:52.391 [2024-12-10 04:13:46.677724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:52.391 [2024-12-10 04:13:46.677733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
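The app_setup_trace notices above give the exact commands for inspecting the tracepoints enabled by -e 0xFFFF. A minimal sketch of following them (illustrative; the spdk_trace binary path is assumed to be the build tree used elsewhere in this job):

# capture a snapshot of runtime events, as suggested by the notice above
./spdk/build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt
# or keep the raw shared-memory trace file for offline analysis/debug
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0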
00:25:52.391 [2024-12-10 04:13:46.679043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:52.391 [2024-12-10 04:13:46.679102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:52.391 [2024-12-10 04:13:46.679106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.391 [2024-12-10 04:13:46.679413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.391 [2024-12-10 04:13:46.679787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.391 [2024-12-10 04:13:46.679823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.391 [2024-12-10 04:13:46.679841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.391 [2024-12-10 04:13:46.680072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.391 [2024-12-10 04:13:46.680286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.391 [2024-12-10 04:13:46.680307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.391 [2024-12-10 04:13:46.680320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.391 [2024-12-10 04:13:46.680333] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.391 [2024-12-10 04:13:46.693049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.391 [2024-12-10 04:13:46.693575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.391 [2024-12-10 04:13:46.693616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.391 [2024-12-10 04:13:46.693637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.391 [2024-12-10 04:13:46.693877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.391 [2024-12-10 04:13:46.694096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.391 [2024-12-10 04:13:46.694118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.391 [2024-12-10 04:13:46.694135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.391 [2024-12-10 04:13:46.694151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
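The three reactor threads above come up on cores 1, 2 and 3, which is consistent with a core mask such as 0xE; the actual -m value passed to the application is not visible in this excerpt, so treat that mask as an assumption. Expanding a mask into core numbers is a one-liner:

    python3 -c 'mask = 0xE; print([c for c in range(64) if (mask >> c) & 1])'
    # prints: [1, 2, 3]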
00:25:52.391 [2024-12-10 04:13:46.706731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.391 [2024-12-10 04:13:46.707254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.391 [2024-12-10 04:13:46.707295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.391 [2024-12-10 04:13:46.707331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.391 [2024-12-10 04:13:46.707580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.391 [2024-12-10 04:13:46.707799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.391 [2024-12-10 04:13:46.707821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.391 [2024-12-10 04:13:46.707838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.391 [2024-12-10 04:13:46.707853] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.391 [2024-12-10 04:13:46.720220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.391 [2024-12-10 04:13:46.720771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.391 [2024-12-10 04:13:46.720812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.391 [2024-12-10 04:13:46.720833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.391 [2024-12-10 04:13:46.721086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.391 [2024-12-10 04:13:46.721305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.391 [2024-12-10 04:13:46.721326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.391 [2024-12-10 04:13:46.721342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.391 [2024-12-10 04:13:46.721357] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:52.391 [2024-12-10 04:13:46.733698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.391 [2024-12-10 04:13:46.734180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.391 [2024-12-10 04:13:46.734216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.391 [2024-12-10 04:13:46.734235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.391 [2024-12-10 04:13:46.734459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.391 [2024-12-10 04:13:46.734709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.391 [2024-12-10 04:13:46.734731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.391 [2024-12-10 04:13:46.734748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.391 [2024-12-10 04:13:46.734763] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.391 [2024-12-10 04:13:46.747317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.391 [2024-12-10 04:13:46.747931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.391 [2024-12-10 04:13:46.747976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.391 [2024-12-10 04:13:46.747997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.391 [2024-12-10 04:13:46.748240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.391 [2024-12-10 04:13:46.748460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.391 [2024-12-10 04:13:46.748481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.391 [2024-12-10 04:13:46.748500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.391 [2024-12-10 04:13:46.748515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
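While the bdevperf side keeps retrying, the refusal can be reproduced by hand with a plain TCP probe against the same address and port; this is purely an interactive check and not part of the test script:

    timeout 1 bash -c ': >/dev/tcp/10.0.0.2/4420' \
        && echo "port 4420 is accepting connections" \
        || echo "refused or timed out - matches the ECONNREFUSED (111) above"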
00:25:52.391 [2024-12-10 04:13:46.760976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.391 [2024-12-10 04:13:46.761493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.391 [2024-12-10 04:13:46.761530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.391 [2024-12-10 04:13:46.761557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.391 [2024-12-10 04:13:46.761785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.391 [2024-12-10 04:13:46.762020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.391 [2024-12-10 04:13:46.762042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.391 [2024-12-10 04:13:46.762070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.391 [2024-12-10 04:13:46.762086] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.653 [2024-12-10 04:13:46.774646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.653 [2024-12-10 04:13:46.775018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.653 [2024-12-10 04:13:46.775047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.653 [2024-12-10 04:13:46.775064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.653 [2024-12-10 04:13:46.775301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.653 [2024-12-10 04:13:46.775525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.653 [2024-12-10 04:13:46.775557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.653 [2024-12-10 04:13:46.775574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.653 [2024-12-10 04:13:46.775587] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:52.653 [2024-12-10 04:13:46.788254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.653 [2024-12-10 04:13:46.788603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.653 [2024-12-10 04:13:46.788633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.653 [2024-12-10 04:13:46.788650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.653 [2024-12-10 04:13:46.788868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.653 [2024-12-10 04:13:46.789088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.653 [2024-12-10 04:13:46.789108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.653 [2024-12-10 04:13:46.789122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.653 [2024-12-10 04:13:46.789135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.653 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:52.653 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:52.653 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:52.653 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:52.653 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:52.653 [2024-12-10 04:13:46.801947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.653 [2024-12-10 04:13:46.802284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.653 [2024-12-10 04:13:46.802313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.653 [2024-12-10 04:13:46.802329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.653 [2024-12-10 04:13:46.802554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.653 [2024-12-10 04:13:46.802782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.653 [2024-12-10 04:13:46.802803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.653 [2024-12-10 04:13:46.802816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.653 [2024-12-10 04:13:46.802844] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:52.653 [2024-12-10 04:13:46.815475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.653 [2024-12-10 04:13:46.815874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.653 [2024-12-10 04:13:46.815904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.653 [2024-12-10 04:13:46.815920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.653 [2024-12-10 04:13:46.816151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.653 [2024-12-10 04:13:46.816365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.653 [2024-12-10 04:13:46.816385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.653 [2024-12-10 04:13:46.816398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.653 [2024-12-10 04:13:46.816410] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.653 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.653 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:52.653 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.653 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:52.653 [2024-12-10 04:13:46.827862] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.653 [2024-12-10 04:13:46.829133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.653 [2024-12-10 04:13:46.829482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.653 [2024-12-10 04:13:46.829511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.653 [2024-12-10 04:13:46.829527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.653 [2024-12-10 04:13:46.829751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.653 [2024-12-10 04:13:46.829983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.653 [2024-12-10 04:13:46.830004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.653 [2024-12-10 04:13:46.830017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.653 [2024-12-10 04:13:46.830029] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
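The trap registered above is the cleanup idiom these test scripts rely on: dump the target's shared-memory state, then run nvmftestfini, regardless of whether the test exits normally, is killed, or is interrupted. A stand-alone sketch of the same pattern, with an illustrative cleanup body only:

    cleanup() { echo "collect shm state, then tear down the nvmf target"; }
    trap 'cleanup || :' SIGINT SIGTERM EXIT    # fires on Ctrl-C, kill, or normal exit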
00:25:52.653 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.653 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:52.653 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.653 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:52.653 [2024-12-10 04:13:46.842966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.653 [2024-12-10 04:13:46.843380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.653 [2024-12-10 04:13:46.843412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.653 [2024-12-10 04:13:46.843430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.653 [2024-12-10 04:13:46.843659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.653 [2024-12-10 04:13:46.843897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.654 [2024-12-10 04:13:46.843918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.654 [2024-12-10 04:13:46.843933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.654 [2024-12-10 04:13:46.843947] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.654 [2024-12-10 04:13:46.856384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.654 [2024-12-10 04:13:46.856748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.654 [2024-12-10 04:13:46.856776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.654 [2024-12-10 04:13:46.856793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.654 [2024-12-10 04:13:46.857024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.654 [2024-12-10 04:13:46.857246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.654 [2024-12-10 04:13:46.857265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.654 [2024-12-10 04:13:46.857278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.654 [2024-12-10 04:13:46.857290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:52.654 [2024-12-10 04:13:46.869953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.654 [2024-12-10 04:13:46.870343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.654 [2024-12-10 04:13:46.870373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.654 [2024-12-10 04:13:46.870390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.654 [2024-12-10 04:13:46.870619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.654 [2024-12-10 04:13:46.870857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.654 [2024-12-10 04:13:46.870878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.654 [2024-12-10 04:13:46.870893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.654 [2024-12-10 04:13:46.870906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:52.654 Malloc0 00:25:52.654 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.654 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:52.654 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.654 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:52.654 [2024-12-10 04:13:46.883585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.654 [2024-12-10 04:13:46.884014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.654 [2024-12-10 04:13:46.884043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e02660 with addr=10.0.0.2, port=4420 00:25:52.654 [2024-12-10 04:13:46.884059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02660 is same with the state(6) to be set 00:25:52.654 [2024-12-10 04:13:46.884291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e02660 (9): Bad file descriptor 00:25:52.654 [2024-12-10 04:13:46.884505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:52.654 [2024-12-10 04:13:46.884541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:52.654 [2024-12-10 04:13:46.884565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:52.654 [2024-12-10 04:13:46.884579] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
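Stripped of the xtrace noise, the rpc_cmd calls in this stretch of the log assemble a minimal TCP target: create the transport, back it with a 64 MiB malloc bdev (512-byte blocks), expose it as cnode1, then attach the namespace and listener (those last two calls appear just below). Replayed by hand against a running target, the sequence would look roughly like this; the rpc.py path and the default RPC socket are assumptions:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420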
00:25:52.654 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.654 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:52.654 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.654 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:52.654 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.654 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:52.654 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.654 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:52.654 [2024-12-10 04:13:46.897166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.654 [2024-12-10 04:13:46.897296] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.654 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.654 04:13:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2502079 00:25:52.654 [2024-12-10 04:13:46.968311] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:25:54.031 3611.17 IOPS, 14.11 MiB/s [2024-12-10T03:13:49.360Z] 4302.14 IOPS, 16.81 MiB/s [2024-12-10T03:13:50.296Z] 4827.38 IOPS, 18.86 MiB/s [2024-12-10T03:13:51.235Z] 5241.78 IOPS, 20.48 MiB/s [2024-12-10T03:13:52.171Z] 5568.50 IOPS, 21.75 MiB/s [2024-12-10T03:13:53.140Z] 5839.00 IOPS, 22.81 MiB/s [2024-12-10T03:13:54.075Z] 6056.08 IOPS, 23.66 MiB/s [2024-12-10T03:13:55.457Z] 6249.54 IOPS, 24.41 MiB/s [2024-12-10T03:13:56.390Z] 6407.86 IOPS, 25.03 MiB/s 00:26:02.001 Latency(us) 00:26:02.001 [2024-12-10T03:13:56.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.001 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:02.001 Verification LBA range: start 0x0 length 0x4000 00:26:02.001 Nvme1n1 : 15.00 6555.35 25.61 10105.80 0.00 7659.83 843.47 17767.54 00:26:02.001 [2024-12-10T03:13:56.390Z] =================================================================================================================== 00:26:02.001 [2024-12-10T03:13:56.390Z] Total : 6555.35 25.61 10105.80 0.00 7659.83 843.47 17767.54 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:02.001 04:13:56 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:02.001 rmmod nvme_tcp 00:26:02.001 rmmod nvme_fabrics 00:26:02.001 rmmod nvme_keyring 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2502746 ']' 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2502746 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2502746 ']' 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2502746 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:02.001 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2502746 00:26:02.261 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:02.261 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:02.261 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2502746' 00:26:02.261 killing process with pid 2502746 00:26:02.261 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2502746 00:26:02.261 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2502746 00:26:02.522 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:02.522 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:02.522 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:02.522 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:26:02.522 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:26:02.522 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:02.522 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:26:02.522 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:02.522 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:02.522 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.522 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.522 04:13:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.428 04:13:58 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:04.428 00:26:04.428 real 0m22.646s 00:26:04.428 user 1m0.901s 00:26:04.428 sys 0m4.101s 00:26:04.428 04:13:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:04.428 04:13:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:04.428 ************************************ 00:26:04.428 END TEST nvmf_bdevperf 00:26:04.428 ************************************ 00:26:04.428 04:13:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:04.428 04:13:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:04.428 04:13:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:04.428 04:13:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.428 ************************************ 00:26:04.428 START TEST nvmf_target_disconnect 00:26:04.428 ************************************ 00:26:04.428 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:04.687 * Looking for test storage... 00:26:04.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:04.687 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:04.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.688 --rc genhtml_branch_coverage=1 00:26:04.688 --rc genhtml_function_coverage=1 00:26:04.688 --rc genhtml_legend=1 00:26:04.688 --rc geninfo_all_blocks=1 00:26:04.688 --rc geninfo_unexecuted_blocks=1 00:26:04.688 00:26:04.688 ' 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:04.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.688 --rc genhtml_branch_coverage=1 00:26:04.688 --rc genhtml_function_coverage=1 00:26:04.688 --rc genhtml_legend=1 00:26:04.688 --rc geninfo_all_blocks=1 00:26:04.688 --rc geninfo_unexecuted_blocks=1 00:26:04.688 00:26:04.688 ' 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:04.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.688 --rc genhtml_branch_coverage=1 00:26:04.688 --rc genhtml_function_coverage=1 00:26:04.688 --rc genhtml_legend=1 00:26:04.688 --rc geninfo_all_blocks=1 00:26:04.688 --rc geninfo_unexecuted_blocks=1 00:26:04.688 00:26:04.688 ' 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:04.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.688 --rc genhtml_branch_coverage=1 00:26:04.688 --rc genhtml_function_coverage=1 00:26:04.688 --rc genhtml_legend=1 00:26:04.688 --rc geninfo_all_blocks=1 00:26:04.688 --rc geninfo_unexecuted_blocks=1 00:26:04.688 00:26:04.688 ' 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:04.688 04:13:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:04.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:04.688 04:13:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:07.220 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:07.220 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:07.220 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:07.220 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:07.221 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
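nvmf_tcp_init then splits the two e810 ports across a network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and both directions are ping-tested, as the trace below shows. In outline, with interface names taken from this log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                                    # initiator namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> initiator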
00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:07.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:07.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:26:07.221 00:26:07.221 --- 10.0.0.2 ping statistics --- 00:26:07.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.221 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:07.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:07.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:26:07.221 00:26:07.221 --- 10.0.0.1 ping statistics --- 00:26:07.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.221 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:07.221 ************************************ 00:26:07.221 START TEST nvmf_target_disconnect_tc1 00:26:07.221 ************************************ 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:07.221 04:14:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:07.221 [2024-12-10 04:14:01.286963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-12-10 04:14:01.287048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23aaf40 with addr=10.0.0.2, port=4420 00:26:07.221 [2024-12-10 04:14:01.287085] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:07.221 [2024-12-10 04:14:01.287116] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:07.221 [2024-12-10 04:14:01.287129] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:26:07.221 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:07.221 Initializing NVMe Controllers 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:07.221 00:26:07.221 real 0m0.096s 00:26:07.221 user 0m0.050s 00:26:07.221 sys 0m0.046s 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:07.221 ************************************ 00:26:07.221 END TEST nvmf_target_disconnect_tc1 00:26:07.221 ************************************ 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:07.221 ************************************ 00:26:07.221 START TEST nvmf_target_disconnect_tc2 00:26:07.221 ************************************ 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:07.221 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:07.222 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:07.222 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2505915 00:26:07.222 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:07.222 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2505915 00:26:07.222 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2505915 ']' 00:26:07.222 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.222 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:07.222 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.222 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:07.222 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:07.222 [2024-12-10 04:14:01.404923] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:26:07.222 [2024-12-10 04:14:01.405003] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.222 [2024-12-10 04:14:01.477198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:07.222 [2024-12-10 04:14:01.535445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:07.222 [2024-12-10 04:14:01.535500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
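Editor's note: the nvmf_tcp_init sequence traced just before the tc1 case boils down to the namespace topology below. Interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones shown in the log; this is a condensed sketch of the helper, not its full body (the iptables comment tag, for instance, is dropped).

# Target side lives in its own network namespace; the initiator stays in the default one.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator-side IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side IP

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP traffic (port 4420) in on the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions before any target is started.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1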
00:26:07.222 [2024-12-10 04:14:01.535536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:07.222 [2024-12-10 04:14:01.535560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:07.222 [2024-12-10 04:14:01.535576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:07.222 [2024-12-10 04:14:01.537071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:07.222 [2024-12-10 04:14:01.537132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:07.222 [2024-12-10 04:14:01.537197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:07.222 [2024-12-10 04:14:01.537200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:07.481 Malloc0 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:07.481 [2024-12-10 04:14:01.724798] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:07.481 04:14:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:07.481 [2024-12-10 04:14:01.753091] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2506052 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:07.481 04:14:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:09.387 04:14:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2505915 00:26:09.387 04:14:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with 
error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 [2024-12-10 04:14:03.780312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write 
completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 [2024-12-10 04:14:03.780656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.674 Write completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 
00:26:09.674 Read completed with error (sct=0, sc=8) 00:26:09.674 starting I/O failed 00:26:09.675 Write completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Write completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Write completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Write completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Write completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 [2024-12-10 04:14:03.780981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Write completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Write completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Write completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Write completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Write completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Write completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Write completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Write completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Write completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Write completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Write completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 
starting I/O failed 00:26:09.675 Write completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Read completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 Write completed with error (sct=0, sc=8) 00:26:09.675 starting I/O failed 00:26:09.675 [2024-12-10 04:14:03.781281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:09.675 [2024-12-10 04:14:03.781483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.781532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.781663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.781691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.781847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.781874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.782004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.782030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.782152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.782179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.782296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.782323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.782437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.782463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.782567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.782611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 
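Editor's note: the target-side configuration that the tc2 case issued through rpc_cmd, before the reconnect run and the failures above, corresponds to the rpc.py sequence sketched below. The calls mirror the trace one for one; the rpc.py path and the default /var/tmp/spdk.sock socket are the usual SPDK conventions and are assumptions of this sketch, not something shown in the log.

rpc=./scripts/rpc.py   # talks to the nvmf_tgt started inside the namespace via its Unix RPC socket

$rpc bdev_malloc_create 64 512 -b Malloc0                                        # 64 MB malloc-backed bdev, 512-byte blocks
$rpc nvmf_create_transport -t tcp -o                                             # TCP transport, options exactly as traced
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, fixed serial
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420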
00:26:09.675 [2024-12-10 04:14:03.782720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.782746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.782859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.782885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.782964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.782989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.783073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.783099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.783219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.783266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.783393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.783420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.783541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.783574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.783695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.783730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.783845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.783872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.783952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.783979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 
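Editor's note: the burst of failed completions and the connect() retries around it come from the tc2 sequence of starting the reconnect workload and then killing the target underneath it. In outline (binary path, workload options, and transport string are exactly the ones traced earlier; $nvmfpid is the nvmf_tgt pid captured at startup, 2505915 in this run):

# Start the I/O load in the background against the target's TCP listener.
./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
reconnectpid=$!

sleep 2               # let the workload ramp up on all four cores
kill -9 "$nvmfpid"    # hard-kill nvmf_tgt: every in-flight command is completed with an error
sleep 2               # the host side now logs the aborted completions and keeps retrying connect()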
00:26:09.675 [2024-12-10 04:14:03.784097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.784124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.784248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.784274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.784382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.784430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.784588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.784615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.784707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.675 [2024-12-10 04:14:03.784732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.675 qpair failed and we were unable to recover it. 00:26:09.675 [2024-12-10 04:14:03.784870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.784896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.785038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.785063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.785207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.785232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.785318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.785344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.785472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.785512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 
00:26:09.676 [2024-12-10 04:14:03.785657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.785696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.785808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.785836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.786000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.786027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.786145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.786170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.786278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.786304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.786387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.786414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.786498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.786525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.786630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.786656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.786737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.786763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.786845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.786871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 
00:26:09.676 [2024-12-10 04:14:03.786978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.787003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.787119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.787145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.787229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.787255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.787371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.787399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.787525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.787557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.787686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.787714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.787799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.787825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.787917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.787943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.788029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.788055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.788172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.788199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 
00:26:09.676 [2024-12-10 04:14:03.788344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.788370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.788504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.788543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.788642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.788669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.788754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.788780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.788895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.788921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.789000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.789025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.789140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.789166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.789281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.789312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.789414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.789454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.676 [2024-12-10 04:14:03.789578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.789608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 
00:26:09.676 [2024-12-10 04:14:03.789720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.676 [2024-12-10 04:14:03.789748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.676 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.789836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.789862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.789975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.790001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.790087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.790115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.790230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.790257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.790352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.790377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.790462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.790488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.790584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.790610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.790748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.790774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.790884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.790910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 
00:26:09.677 [2024-12-10 04:14:03.791054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.791080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.791177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.791203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.791289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.791314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.791395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.791421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.791531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.791564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.791650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.791676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.791770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.791795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.791891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.791917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.792008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.792034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.792128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.792167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 
00:26:09.677 [2024-12-10 04:14:03.792291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.792320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.792405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.792431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.792553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.792580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.792686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.792712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.792817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.792857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.792979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.793006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.793178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.793243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.793363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.793390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.793531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.793563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.793680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.793706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 
00:26:09.677 [2024-12-10 04:14:03.793797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.793823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.793915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.793941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.794083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.794110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.794196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.794222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.794327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.794353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.794448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.794474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.794571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.794601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.794722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.794753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.794851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.677 [2024-12-10 04:14:03.794878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.677 qpair failed and we were unable to recover it. 00:26:09.677 [2024-12-10 04:14:03.795052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.795077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 
00:26:09.678 [2024-12-10 04:14:03.795193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.795219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.795304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.795329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.795419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.795446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.795554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.795580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.795673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.795699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.795818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.795906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.795982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.796008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.796095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.796122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.796213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.796239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.796351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.796376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 
00:26:09.678 [2024-12-10 04:14:03.796457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.796485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.796618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.796645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.796743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.796771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.796849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.796876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.796993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.797019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.797131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.797157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.797274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.797302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.797389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.797416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.797560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.797587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.797702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.797727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 
00:26:09.678 [2024-12-10 04:14:03.797814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.797840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.797931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.797957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.798104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.798131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.798244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.798270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.798378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.798409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.798524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.798565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.798652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.798680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.798827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.798853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.798966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.798993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.799117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.799143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 
00:26:09.678 [2024-12-10 04:14:03.799258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.799284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.678 qpair failed and we were unable to recover it. 00:26:09.678 [2024-12-10 04:14:03.799400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.678 [2024-12-10 04:14:03.799427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.799523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.799571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.799677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.799715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.799839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.799866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.799948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.799974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.800147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.800173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.800292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.800318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.800431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.800470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.800585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.800614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 
00:26:09.679 [2024-12-10 04:14:03.800710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.800738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.800851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.800876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.800985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.801011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.801128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.801153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.801267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.801294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.801412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.801440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.801532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.801573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.801661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.801687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.801775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.801801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.801877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.801903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 
00:26:09.679 [2024-12-10 04:14:03.801988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.802014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.802106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.802133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.802210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.802236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.802346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.802373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.802455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.802481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.802582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.802609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.802690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.802716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.802806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.802832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.802945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.802972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.803054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.803081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 
00:26:09.679 [2024-12-10 04:14:03.803204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.803233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.803380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.803408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.803522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.803553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.803640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.803666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.803754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.803785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.803862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.803889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.803978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.804004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.804088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.804115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.804207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.804234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 00:26:09.679 [2024-12-10 04:14:03.804312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.679 [2024-12-10 04:14:03.804338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.679 qpair failed and we were unable to recover it. 
00:26:09.680 [2024-12-10 04:14:03.804430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.804457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.804554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.804582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.804716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.804754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.804845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.804871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.805011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.805037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.805126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.805152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.805257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.805282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.805397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.805422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.805509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.805535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.805648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.805674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 
00:26:09.680 [2024-12-10 04:14:03.805793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.805818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.805957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.805982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.806089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.806115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.806198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.806226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.806346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.806374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.806469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.806508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.806599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.806626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.806743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.806768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.806883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.806909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.806996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.807022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 
00:26:09.680 [2024-12-10 04:14:03.807130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.807157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.807268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.807300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.807388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.807415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.807499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.807526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.807644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.807671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.807786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.807812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.807901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.807927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.808050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.808078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.808196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.808222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.808339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.808365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 
00:26:09.680 [2024-12-10 04:14:03.808439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.808464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.808591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.808631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.808727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.808766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.808882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.808909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.809052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.809078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.809302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.809328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.809418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.809446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.809565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.809593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.680 [2024-12-10 04:14:03.809687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.680 [2024-12-10 04:14:03.809713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.680 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.809827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.809853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 
00:26:09.681 [2024-12-10 04:14:03.809962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.809989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.810100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.810126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.810242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.810269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.810388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.810416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.810523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.810570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.810693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.810720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.810858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.810883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.810993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.811019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.811097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.811127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.811213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.811241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 
00:26:09.681 [2024-12-10 04:14:03.811364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.811390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.811504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.811532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.811625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.811652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.811790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.811816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.811902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.811928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.812048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.812075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.812191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.812217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.812299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.812325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.812440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.812465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.812542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.812573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 
00:26:09.681 [2024-12-10 04:14:03.812656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.812680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.812797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.812822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.812942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.812967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.813053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.813079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.813191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.813219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.813379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.813418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.813524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.813573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.813666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.813694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.813812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.813838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.813955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.813982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 
00:26:09.681 [2024-12-10 04:14:03.814126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.814153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.814254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.814281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.814400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.814428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.814519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.814551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.814641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.814667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.814837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.814889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.814985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.815051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.681 qpair failed and we were unable to recover it. 00:26:09.681 [2024-12-10 04:14:03.815222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.681 [2024-12-10 04:14:03.815276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.815361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.815388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.815473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.815499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 
00:26:09.682 [2024-12-10 04:14:03.815644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.815671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.815758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.815784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.815899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.815925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.816012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.816038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.816156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.816182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.816329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.816356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.816469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.816495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.816582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.816609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.816748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.816780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.816866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.816893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 
00:26:09.682 [2024-12-10 04:14:03.816979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.817004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.817098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.817126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.817229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.817268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.817388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.817415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.817530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.817568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.817685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.817711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.817788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.817814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.817904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.817930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.818043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.818068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.818157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.818182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 
00:26:09.682 [2024-12-10 04:14:03.818299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.818325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.818437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.818462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.818556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.818582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.818698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.818724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.818814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.818841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.818926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.818952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.819045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.819072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.819182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.819208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.819291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.819317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.819405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.819432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 
00:26:09.682 [2024-12-10 04:14:03.819574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.819601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.819712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.819738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.819844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.819870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.820010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.820035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.820119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.820144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.682 [2024-12-10 04:14:03.820250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.682 [2024-12-10 04:14:03.820280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.682 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.820397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.820423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.820523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.820570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.820682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.820721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.820807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.820835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 
00:26:09.683 [2024-12-10 04:14:03.820919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.820946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.821058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.821085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.821221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.821248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.821330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.821357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.821443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.821468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.821584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.821610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.821722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.821747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.821857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.821882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.821986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.822012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.822128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.822155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 
00:26:09.683 [2024-12-10 04:14:03.822275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.822306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.822418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.822443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.822558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.822584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.822703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.822730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.822859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.822897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.823020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.823047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.823135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.823162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.823280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.823306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.823416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.823443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.823563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.823589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 
00:26:09.683 [2024-12-10 04:14:03.823669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.823697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.823774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.823800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.823915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.823943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.824034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.824060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.824152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.824179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.824287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.824313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.824424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.824451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.824533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.824566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.683 [2024-12-10 04:14:03.824686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.683 [2024-12-10 04:14:03.824712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.683 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.824819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.824845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 
00:26:09.684 [2024-12-10 04:14:03.824959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.824986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.825066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.825094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.825185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.825210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.825285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.825311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.825422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.825448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.825540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.825581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.825696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.825722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.825807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.825834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.825954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.825981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.826088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.826114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 
00:26:09.684 [2024-12-10 04:14:03.826221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.826248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.826358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.826385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.826513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.826558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.826683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.826711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.826799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.826826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.826942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.826968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.827087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.827113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.827227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.827253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.827332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.827359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.827490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.827529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 
00:26:09.684 [2024-12-10 04:14:03.827631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.827660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.827753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.827778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.827864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.827890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.828024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.828049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.828153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.828179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.828291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.828317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.828398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.828425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.828538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.828585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.828668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.828696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.828825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.828863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 
00:26:09.684 [2024-12-10 04:14:03.828960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.828986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.829125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.829152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.829235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.829265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.829355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.829380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.829492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.829519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.684 qpair failed and we were unable to recover it. 00:26:09.684 [2024-12-10 04:14:03.829616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.684 [2024-12-10 04:14:03.829644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.829743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.829782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.829991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.830045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.830196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.830249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.830365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.830391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 
00:26:09.685 [2024-12-10 04:14:03.830500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.830527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.830611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.830637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.830749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.830775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.830894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.830921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.831049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.831099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.831300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.831351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.831470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.831496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.831618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.831647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.831767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.831793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.831930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.831956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 
00:26:09.685 [2024-12-10 04:14:03.832068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.832094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.832180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.832206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.832322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.832350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.832461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.832486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.832564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.832590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.832699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.832725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.832814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.832841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.832917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.832942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.833052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.833078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.833173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.833199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 
00:26:09.685 [2024-12-10 04:14:03.833353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.833393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.833492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.833520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.833634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.833661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.833741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.833767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.833854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.833879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.833962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.833988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.834069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.834095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.834220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.834247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.834344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.834383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.834527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.834561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 
00:26:09.685 [2024-12-10 04:14:03.834672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.834697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.834807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.834832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.834951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.834981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.835096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.685 [2024-12-10 04:14:03.835121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.685 qpair failed and we were unable to recover it. 00:26:09.685 [2024-12-10 04:14:03.835227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.835252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.835343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.835367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.835477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.835504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.835607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.835634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.835723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.835750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.835887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.835912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 
00:26:09.686 [2024-12-10 04:14:03.836058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.836084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.836190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.836216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.836331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.836356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.836492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.836518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.836660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.836686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.836829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.836856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.836971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.836997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.837080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.837106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.837226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.837251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.837367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.837392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 
00:26:09.686 [2024-12-10 04:14:03.837521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.837565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.837687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.837714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.837827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.837852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.837940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.837966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.838079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.838105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.838190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.838215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.838291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.838315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.838424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.838449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.838588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.838614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.838724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.838751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 
00:26:09.686 [2024-12-10 04:14:03.838865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.838891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.839009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.839034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.839154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.839179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.839260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.839286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.839397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.839423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.839568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.839596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.839687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.839713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.839828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.839853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.839992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.840018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.840192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.840217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 
00:26:09.686 [2024-12-10 04:14:03.840340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.840380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.840500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.840528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.686 [2024-12-10 04:14:03.840641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.686 [2024-12-10 04:14:03.840684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.686 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.840782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.840809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.840950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.840976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.841065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.841090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.841181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.841206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.841292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.841317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.841395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.841421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.841503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.841529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 
00:26:09.687 [2024-12-10 04:14:03.841662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.841690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.841808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.841834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.841923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.841950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.842069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.842095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.842230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.842256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.842338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.842363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.842509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.842536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.842640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.842666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.842811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.842861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.843007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.843059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 
00:26:09.687 [2024-12-10 04:14:03.843161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.843217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.843342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.843381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.843503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.843530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.843663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.843702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.843845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.843904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.844101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.844153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.844272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.844323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.844416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.844442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.844558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.844585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.844697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.844729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 
00:26:09.687 [2024-12-10 04:14:03.844818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.844844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.844962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.844987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.845073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.845099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.845215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.845240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.845330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.845355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.845466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.845492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.845578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.845605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.845709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.845748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.845875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.845903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.687 qpair failed and we were unable to recover it. 00:26:09.687 [2024-12-10 04:14:03.846019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.687 [2024-12-10 04:14:03.846045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 
00:26:09.688 [2024-12-10 04:14:03.846189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.846215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.846296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.846324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.846414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.846441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.846592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.846619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.846760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.846787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.846872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.846898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.847003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.847029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.847140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.847167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.847274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.847300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.847407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.847432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 
00:26:09.688 [2024-12-10 04:14:03.847569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.847594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.847684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.847710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.847793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.847819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.847956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.847980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.848073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.848111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.848267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.848295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.848411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.848443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.848530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.848562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.848651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.848679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.848800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.848829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 
00:26:09.688 [2024-12-10 04:14:03.848945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.848972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.849086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.849112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.849188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.849213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.849322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.849347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.849436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.849462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.849614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.849640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.849755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.849780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.849890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.849916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.850034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.850060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.850150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.850175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 
00:26:09.688 [2024-12-10 04:14:03.850294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.850319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.850428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.850455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.850551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.850578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.850673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.850699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.850782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.850808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.688 qpair failed and we were unable to recover it. 00:26:09.688 [2024-12-10 04:14:03.850919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.688 [2024-12-10 04:14:03.850944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.851020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.851046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.851162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.851188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.851295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.851321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.851432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.851459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 
00:26:09.689 [2024-12-10 04:14:03.851551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.851578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.851710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.851736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.851850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.851876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.851988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.852014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.852133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.852158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.852250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.852276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.852394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.852422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.852539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.852571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.852655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.852680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.852787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.852812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 
00:26:09.689 [2024-12-10 04:14:03.852903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.852928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.853069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.853094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.853214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.853240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.853348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.853374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.853462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.853487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.853609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.853637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.853752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.853782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.853873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.853899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.854013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.854038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.854122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.854148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 
00:26:09.689 [2024-12-10 04:14:03.854236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.854261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.854355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.854381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.854463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.854489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.854562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.854587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.854693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.854718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.854799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.854825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.854913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.854940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.855052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.855079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.855192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.855218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.855354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.855380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 
00:26:09.689 [2024-12-10 04:14:03.855476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.855501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.855609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.855648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.855761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.855799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.855945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.689 [2024-12-10 04:14:03.855972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.689 qpair failed and we were unable to recover it. 00:26:09.689 [2024-12-10 04:14:03.856081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.856106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.856208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.856233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.856344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.856369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.856475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.856500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.856622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.856649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.856765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.856791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 
00:26:09.690 [2024-12-10 04:14:03.856909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.856935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.857045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.857070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.857200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.857239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.857380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.857419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.857557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.857597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.857691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.857718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.857835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.857861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.858000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.858026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.858164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.858190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.858299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.858324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 
00:26:09.690 [2024-12-10 04:14:03.858429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.858468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.858568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.858599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.858747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.858773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.858924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.858951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.859095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.859121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.859240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.859266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.859356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.859387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.859505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.859534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.859636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.859663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.859772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.859798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 
00:26:09.690 [2024-12-10 04:14:03.859887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.859912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.859986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.860011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.860094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.860120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.860234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.860260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.860371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.860397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.860520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.860567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.860694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.860721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.860837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.860864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.860988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.861014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.861099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.861126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 
00:26:09.690 [2024-12-10 04:14:03.861247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.861273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.861393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.861421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.690 [2024-12-10 04:14:03.861510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.690 [2024-12-10 04:14:03.861539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.690 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.861675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.861713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.861809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.861836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.861950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.861975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.862094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.862119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.862218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.862275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.862391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.862418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.862537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.862568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 
00:26:09.691 [2024-12-10 04:14:03.862683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.862709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.862791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.862816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.862918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.862944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.863040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.863066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.863157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.863183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.863295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.863321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.863457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.863483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.863569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.863596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.863708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.863733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.863852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.863878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 
00:26:09.691 [2024-12-10 04:14:03.864016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.864042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.864126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.864151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.864255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.864281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.864368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.864395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.864508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.864535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.864662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.864688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.864802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.864828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.864942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.864968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.865048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.865074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.865218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.865243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 
00:26:09.691 [2024-12-10 04:14:03.865350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.865376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.865481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.865506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.865617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.865644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.865758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.865783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.865875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.865900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.866013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.866039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.866129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.866168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.866316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.866345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.866429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.866456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.866569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.866598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 
00:26:09.691 [2024-12-10 04:14:03.866714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.866742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.691 [2024-12-10 04:14:03.866862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.691 [2024-12-10 04:14:03.866888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.691 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.867003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.867030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.867145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.867172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.867290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.867316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.867422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.867447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.867532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.867566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.867653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.867681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.867818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.867843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.867987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.868012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 
00:26:09.692 [2024-12-10 04:14:03.868122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.868148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.868230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.868256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.868354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.868394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.868513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.868555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.868674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.868700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.868816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.868842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.868984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.869009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.869124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.869151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.869256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.869282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.869380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.869420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 
00:26:09.692 [2024-12-10 04:14:03.869569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.869596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.869736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.869762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.869845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.869870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.869986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.870012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.870100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.870127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.870268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.870294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.870413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.870439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.870527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.870558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.870639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.870666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.870779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.870805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 
00:26:09.692 [2024-12-10 04:14:03.870886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.870912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.871027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.871055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.871131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.871157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.871238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.871266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.871380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.871406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.692 [2024-12-10 04:14:03.871481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.692 [2024-12-10 04:14:03.871507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.692 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.871635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.871662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.871774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.871800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.871920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.871945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.872059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.872084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 
00:26:09.693 [2024-12-10 04:14:03.872203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.872231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.872316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.872341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.872426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.872451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.872535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.872568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.872682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.872708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.872835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.872874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.873001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.873028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.873142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.873169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.873253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.873279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.873386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.873414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 
00:26:09.693 [2024-12-10 04:14:03.873528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.873562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.873690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.873715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.873857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.873883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.873992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.874023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.874141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.874170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.874253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.874280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.874427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.874467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.874604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.874632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.874751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.874780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.874897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.874924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 
00:26:09.693 [2024-12-10 04:14:03.875068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.875094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.875209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.875235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.875335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.875361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.875454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.875480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.875597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.875624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.875735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.875760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.875870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.875896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.875982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.876008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.876121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.876149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.876275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.876301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 
00:26:09.693 [2024-12-10 04:14:03.876415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.876441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.876532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.876564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.876705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.693 [2024-12-10 04:14:03.876731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.693 qpair failed and we were unable to recover it. 00:26:09.693 [2024-12-10 04:14:03.876846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.876871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.876987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.877014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.877093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.877120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.877202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.877227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.877337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.877364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.877518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.877564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.877699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.877739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 
00:26:09.694 [2024-12-10 04:14:03.877837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.877865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.878016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.878065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.878183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.878209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.878295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.878321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.878430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.878457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.878570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.878605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.878686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.878713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.878799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.878825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.878906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.878932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.879020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.879046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 
00:26:09.694 [2024-12-10 04:14:03.879132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.879160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.879239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.879266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.879403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.879429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.879507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.879538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.879661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.879687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.879776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.879802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.879920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.879945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.880066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.880091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.880172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.880197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.880341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.880367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 
00:26:09.694 [2024-12-10 04:14:03.880474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.880513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.880628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.880667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.880801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.880828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.880974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.881018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.881161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.881203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.881311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.881336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.881471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.881496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.881629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.881662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.881754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.881782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 00:26:09.694 [2024-12-10 04:14:03.881885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.694 [2024-12-10 04:14:03.881937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.694 qpair failed and we were unable to recover it. 
00:26:09.694 [2024-12-10 04:14:03.882070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.882158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 00:26:09.695 [2024-12-10 04:14:03.882359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.882388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 00:26:09.695 [2024-12-10 04:14:03.882477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.882504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 00:26:09.695 [2024-12-10 04:14:03.882603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.882630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 00:26:09.695 [2024-12-10 04:14:03.882755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.882781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 00:26:09.695 [2024-12-10 04:14:03.882872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.882899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 00:26:09.695 [2024-12-10 04:14:03.883007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.883033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 00:26:09.695 [2024-12-10 04:14:03.883173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.883199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 00:26:09.695 [2024-12-10 04:14:03.883283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.883309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 00:26:09.695 [2024-12-10 04:14:03.883424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.883450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 
00:26:09.695 [2024-12-10 04:14:03.883574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.883615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 00:26:09.695 [2024-12-10 04:14:03.883709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.883735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 00:26:09.695 [2024-12-10 04:14:03.883817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.883843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 00:26:09.695 [2024-12-10 04:14:03.883959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.883985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 00:26:09.695 [2024-12-10 04:14:03.884097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.884122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 00:26:09.695 [2024-12-10 04:14:03.884222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.884261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 00:26:09.695 [2024-12-10 04:14:03.884382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.884410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 00:26:09.695 [2024-12-10 04:14:03.884557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.884586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 00:26:09.695 [2024-12-10 04:14:03.884673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.884699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 00:26:09.695 [2024-12-10 04:14:03.884784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.695 [2024-12-10 04:14:03.884810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.695 qpair failed and we were unable to recover it. 
00:26:09.695 [2024-12-10 04:14:03.884918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.695 [2024-12-10 04:14:03.884944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420
00:26:09.695 qpair failed and we were unable to recover it.
00:26:09.695 [2024-12-10 04:14:03.885039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.695 [2024-12-10 04:14:03.885066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:09.695 qpair failed and we were unable to recover it.
00:26:09.695 [... the same three-line pattern — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock connection error, "qpair failed and we were unable to recover it." — repeats continuously from 04:14:03.885 through 04:14:03.913 for tqpair=0x1559fa0, 0x7f5ba4000b90, 0x7f5ba8000b90, and 0x7f5bb0000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:26:09.701 [2024-12-10 04:14:03.913776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.913803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.913883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.913909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.914047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.914073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.914160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.914185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.914269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.914294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.914373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.914399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.914515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.914541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.914639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.914664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.914767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.914806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.914904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.914929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 
00:26:09.701 [2024-12-10 04:14:03.915013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.915039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.915149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.915174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.915260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.915287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.915385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.915425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.915572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.915599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.915679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.915705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.915816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.915841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.915937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.915975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.916130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.916155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.916270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.916297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 
00:26:09.701 [2024-12-10 04:14:03.916400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.916441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.916531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.916565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.916687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.916714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.916823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.916849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.916989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.917014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.917127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.917153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.917241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.917269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.701 [2024-12-10 04:14:03.917363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.701 [2024-12-10 04:14:03.917392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.701 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.917509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.917536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.917638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.917663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 
00:26:09.702 [2024-12-10 04:14:03.917778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.917804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.917882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.917908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.918048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.918096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.918234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.918282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.918401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.918427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.918541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.918575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.918670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.918699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.918789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.918815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.918984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.919033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.919113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.919138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 
00:26:09.702 [2024-12-10 04:14:03.919277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.919325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.919416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.919443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.919556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.919582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.919686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.919711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.919797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.919822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.919957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.920002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.920133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.920169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.920325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.920353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.920469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.920501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.920624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.920651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 
00:26:09.702 [2024-12-10 04:14:03.920736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.920762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.920892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.920929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.921060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.921086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.921175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.921201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.921311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.921337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.921440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.921466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.921587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.921614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.921732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.921757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.921839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.921865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.921976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.922001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 
00:26:09.702 [2024-12-10 04:14:03.922086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.922112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.922216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.922241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.922326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.922352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.922455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.922481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.922610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.922650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.922768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.922796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.702 qpair failed and we were unable to recover it. 00:26:09.702 [2024-12-10 04:14:03.922887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.702 [2024-12-10 04:14:03.922915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.923058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.923085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.923232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.923259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.923376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.923403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 
00:26:09.703 [2024-12-10 04:14:03.923491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.923517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.923620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.923647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.923779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.923818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.923945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.923972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.924137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.924164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.924274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.924306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.924390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.924416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.924497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.924524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.924626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.924653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.924796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.924831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 
00:26:09.703 [2024-12-10 04:14:03.924916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.924943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.925113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.925168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.925306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.925357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.925505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.925531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.925636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.925661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.925750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.925775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.925888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.925913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.926000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.926027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.926203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.926256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.926386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.926425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 
00:26:09.703 [2024-12-10 04:14:03.926575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.926604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.926723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.926750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.926872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.926899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.927010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.927037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.927128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.927166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.927293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.927320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.927407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.927433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.927555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.927582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.927660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.927686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.927799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.927825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 
00:26:09.703 [2024-12-10 04:14:03.927907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.927934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.928048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.928076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.928172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.928199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.928313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.928339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.703 [2024-12-10 04:14:03.928426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.703 [2024-12-10 04:14:03.928452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.703 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.928603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.928630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.928772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.928797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.928885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.928911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.929029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.929056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.929197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.929224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 
00:26:09.704 [2024-12-10 04:14:03.929364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.929390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.929467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.929493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.929570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.929599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.929716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.929743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.929857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.929884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.929965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.929997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.930084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.930110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.930199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.930225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.930336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.930361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.930484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.930523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 
00:26:09.704 [2024-12-10 04:14:03.930650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.930678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.930791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.930816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.930927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.930952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.931086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.931132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.931246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.931274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.931387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.931412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.931527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.931561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.931654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.931679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.931788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.931813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.931934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.931959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 
00:26:09.704 [2024-12-10 04:14:03.932104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.932129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.932210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.932238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.932378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.932403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.932487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.932513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.932615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.932643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.932727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.932752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.932863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.932890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.932999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.933026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.933137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.933163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 00:26:09.704 [2024-12-10 04:14:03.933277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.704 [2024-12-10 04:14:03.933303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.704 qpair failed and we were unable to recover it. 
00:26:09.704 [... identical records repeat through 00:26:09.710 (2024-12-10 04:14:03.933414 to 04:14:03.960007): posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error, then "qpair failed and we were unable to recover it.", for tqpair handles 0x7f5bb0000b90, 0x7f5ba4000b90, 0x7f5ba8000b90, and 0x1559fa0, all with addr=10.0.0.2, port=4420 ...]
00:26:09.710 [2024-12-10 04:14:03.960096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.960120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.710 [2024-12-10 04:14:03.960277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.960316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.710 [2024-12-10 04:14:03.960408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.960436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.710 [2024-12-10 04:14:03.960521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.960555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.710 [2024-12-10 04:14:03.960675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.960701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.710 [2024-12-10 04:14:03.960820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.960846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.710 [2024-12-10 04:14:03.960958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.960985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.710 [2024-12-10 04:14:03.961089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.961124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.710 [2024-12-10 04:14:03.961221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.961247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.710 [2024-12-10 04:14:03.961326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.961357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 
00:26:09.710 [2024-12-10 04:14:03.961442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.961470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.710 [2024-12-10 04:14:03.961586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.961612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.710 [2024-12-10 04:14:03.961694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.961721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.710 [2024-12-10 04:14:03.961836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.961862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.710 [2024-12-10 04:14:03.962000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.962026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.710 [2024-12-10 04:14:03.962168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.962193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.710 [2024-12-10 04:14:03.962334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.962359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.710 [2024-12-10 04:14:03.962479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.962506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.710 [2024-12-10 04:14:03.962591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.962625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.710 [2024-12-10 04:14:03.962737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.962763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 
00:26:09.710 [2024-12-10 04:14:03.962847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.710 [2024-12-10 04:14:03.962874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.710 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.962957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.962984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.963111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.963138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.963242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.963292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.963417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.963457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.963557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.963587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.963700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.963726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.963811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.963838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.963923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.963949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.964034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.964060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 
00:26:09.711 [2024-12-10 04:14:03.964176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.964208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.964318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.964345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.964460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.964487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.964584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.964612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.964733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.964759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.964841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.964867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.964949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.964975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.965060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.965086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.965222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.965248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.965370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.965397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 
00:26:09.711 [2024-12-10 04:14:03.965552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.965579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.965664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.965690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.965772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.965798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.965936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.965982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.966070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.966097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.966187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.966214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.966329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.966355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.966475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.966501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.966587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.966615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.966711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.966740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 
00:26:09.711 [2024-12-10 04:14:03.966826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.966853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.966937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.966963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.967071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.967097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.967210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.967237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.967348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.967375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.967458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.967485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.967598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.967624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.967742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.967768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.967887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.711 [2024-12-10 04:14:03.967913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.711 qpair failed and we were unable to recover it. 00:26:09.711 [2024-12-10 04:14:03.968052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.968078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 
00:26:09.712 [2024-12-10 04:14:03.968209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.968237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.968327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.968353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.968438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.968464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.968577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.968603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.968691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.968716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.968822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.968847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.968927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.968952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.969035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.969061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.969173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.969199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.969316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.969344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 
00:26:09.712 [2024-12-10 04:14:03.969429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.969461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.969576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.969603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.969710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.969738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.969859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.969885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.969997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.970023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.970130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.970156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.970241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.970267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.970406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.970432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.970574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.970602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.970715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.970741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 
00:26:09.712 [2024-12-10 04:14:03.970859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.970885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.970957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.970982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.971061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.971088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.971167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.971194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.971318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.971344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.971477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.971517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.971654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.971693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.971788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.971816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.971954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.971980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.972150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.972198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 
00:26:09.712 [2024-12-10 04:14:03.972315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.972341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.972458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.972485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.972625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.972650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.972727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.972753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.972869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.972894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.972978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.973003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.973080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.973106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.973218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.712 [2024-12-10 04:14:03.973244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.712 qpair failed and we were unable to recover it. 00:26:09.712 [2024-12-10 04:14:03.973373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.973413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.973521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.973568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 
00:26:09.713 [2024-12-10 04:14:03.973718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.973745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.973855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.973881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.973960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.973986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.974061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.974087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.974204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.974231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.974345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.974372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.974514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.974540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.974663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.974689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.974798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.974824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.974906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.974932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 
00:26:09.713 [2024-12-10 04:14:03.975072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.975105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.975234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.975274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.975395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.975423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.975528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.975560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.975678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.975704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.975791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.975818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.975902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.975928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.976039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.976065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.976204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.976231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.976373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.976400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 
00:26:09.713 [2024-12-10 04:14:03.976528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.976581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.976677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.976707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.976822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.976849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.976963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.976989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.977167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.977218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.977301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.977328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.977447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.977474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.977591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.977621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.977739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.977765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.977872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.977922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 
00:26:09.713 [2024-12-10 04:14:03.978000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.978025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.978105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.978130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.978238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.978263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.978373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.978399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.978479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.978506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.978594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.713 [2024-12-10 04:14:03.978622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.713 qpair failed and we were unable to recover it. 00:26:09.713 [2024-12-10 04:14:03.978709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.714 [2024-12-10 04:14:03.978735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.714 qpair failed and we were unable to recover it. 00:26:09.714 [2024-12-10 04:14:03.978873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.714 [2024-12-10 04:14:03.978903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.714 qpair failed and we were unable to recover it. 00:26:09.714 [2024-12-10 04:14:03.978984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.714 [2024-12-10 04:14:03.979011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.714 qpair failed and we were unable to recover it. 00:26:09.714 [2024-12-10 04:14:03.979129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.714 [2024-12-10 04:14:03.979155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.714 qpair failed and we were unable to recover it. 
00:26:09.714 [2024-12-10 04:14:03.979294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.714 [2024-12-10 04:14:03.979322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.714 qpair failed and we were unable to recover it. 00:26:09.714 [2024-12-10 04:14:03.979426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.714 [2024-12-10 04:14:03.979466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.714 qpair failed and we were unable to recover it. 00:26:09.714 [2024-12-10 04:14:03.979614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.714 [2024-12-10 04:14:03.979643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.714 qpair failed and we were unable to recover it. 00:26:09.714 [2024-12-10 04:14:03.979727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.714 [2024-12-10 04:14:03.979753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.714 qpair failed and we were unable to recover it. 00:26:09.714 [2024-12-10 04:14:03.979837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.714 [2024-12-10 04:14:03.979863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.714 qpair failed and we were unable to recover it. 00:26:09.714 [2024-12-10 04:14:03.979985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.714 [2024-12-10 04:14:03.980013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.714 qpair failed and we were unable to recover it. 00:26:09.714 [2024-12-10 04:14:03.980152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.714 [2024-12-10 04:14:03.980180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.714 qpair failed and we were unable to recover it. 00:26:09.714 [2024-12-10 04:14:03.980313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.714 [2024-12-10 04:14:03.980339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.714 qpair failed and we were unable to recover it. 00:26:09.714 [2024-12-10 04:14:03.980424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.714 [2024-12-10 04:14:03.980449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.714 qpair failed and we were unable to recover it. 00:26:09.714 [2024-12-10 04:14:03.980533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.714 [2024-12-10 04:14:03.980564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.714 qpair failed and we were unable to recover it. 
00:26:09.719 [2024-12-10 04:14:04.006525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.006570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.006672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.006699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.006788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.006814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.006908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.006934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.007016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.007042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.007158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.007185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.007300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.007326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.007413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.007439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.007556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.007582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.007667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.007693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 
00:26:09.719 [2024-12-10 04:14:04.007774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.007800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.007879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.007905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.008027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.008052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.008127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.008154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.008274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.008302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.008424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.008463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.008583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.008612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.008696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.008723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.008864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.008891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.009006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.009032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 
00:26:09.719 [2024-12-10 04:14:04.009143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.009169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.009246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.009272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.009384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.009411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.009523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.719 [2024-12-10 04:14:04.009561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.719 qpair failed and we were unable to recover it. 00:26:09.719 [2024-12-10 04:14:04.009657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.009682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.009800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.009826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.009996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.010042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.010122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.010148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.010292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.010333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.010455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.010483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 
00:26:09.720 [2024-12-10 04:14:04.010578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.010605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.010687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.010713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.010841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.010875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.010990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.011023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.011187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.011233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.011324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.011349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.011427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.011453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.011530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.011563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.011680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.011710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.011788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.011813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 
00:26:09.720 [2024-12-10 04:14:04.011921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.011947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.012059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.012084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.012160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.012185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.012274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.012304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.012379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.012405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.012510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.012536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.012628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.012654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.012724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.012749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.012888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.012914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.013026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.013051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 
00:26:09.720 [2024-12-10 04:14:04.013192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.013217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.013305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.013331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.720 qpair failed and we were unable to recover it. 00:26:09.720 [2024-12-10 04:14:04.013421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.720 [2024-12-10 04:14:04.013447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.013561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.013589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.013677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.013702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.013784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.013809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.013919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.013944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.014056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.014082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.014202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.014227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.014307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.014332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 
00:26:09.721 [2024-12-10 04:14:04.014420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.014446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.014568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.014595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.014705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.014731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.014811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.014837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.014947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.014973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.015063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.015088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.015234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.015260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.015415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.015455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.015580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.015610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.015728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.015754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 
00:26:09.721 [2024-12-10 04:14:04.015842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.015870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.015981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.016007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.016123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.016150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.016257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.016284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.016395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.016421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.016535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.016569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.016658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.016683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.016790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.016815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.016923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.016949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.017092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.017118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 
00:26:09.721 [2024-12-10 04:14:04.017206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.017232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.017305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.017331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.017420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.017446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.017538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.017573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.017692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.017718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.017839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.017865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.017958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.017983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.018068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.018095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.018184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.018210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.018323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.018350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 
00:26:09.721 [2024-12-10 04:14:04.018436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.721 [2024-12-10 04:14:04.018463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.721 qpair failed and we were unable to recover it. 00:26:09.721 [2024-12-10 04:14:04.018540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.018574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.018653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.018685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.018798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.018826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.018909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.018936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.019025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.019051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.019137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.019164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.019277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.019303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.019431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.019470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.019590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.019619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 
00:26:09.722 [2024-12-10 04:14:04.019705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.019730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.019832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.019857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.019941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.019966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.020077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.020103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.020189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.020215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.020331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.020357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.020499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.020524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.020619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.020645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.020741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.020766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.020847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.020872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 
00:26:09.722 [2024-12-10 04:14:04.020948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.020974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.021085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.021111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.021224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.021250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.021331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.021356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.021442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.021468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.021553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.021579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.021664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.021690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.021794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.021819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.021933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.021958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.022038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.022068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 
00:26:09.722 [2024-12-10 04:14:04.022181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.022206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.022319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.022345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.022433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.022458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.022567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.022608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.022696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.022723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.022813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.022839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.022922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.022948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.023030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.023056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.023140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.023166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 00:26:09.722 [2024-12-10 04:14:04.023275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.722 [2024-12-10 04:14:04.023301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.722 qpair failed and we were unable to recover it. 
00:26:09.722 [2024-12-10 04:14:04.023394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.023419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.023538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.023569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.023689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.023715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.023827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.023853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.023979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.024005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.024125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.024154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.024260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.024287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.024371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.024396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.024502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.024528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.024645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.024673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 
00:26:09.723 [2024-12-10 04:14:04.024785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.024810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.024953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.024979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.025069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.025095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.025175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.025202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.025325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.025351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.025465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.025491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.025579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.025610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.025697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.025724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.025841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.025866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.025954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.025980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 
00:26:09.723 [2024-12-10 04:14:04.026066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.026092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.026207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.026239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.026348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.026378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.026494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.026521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.026613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.026639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.026754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.026780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.026887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.026912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.027001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.027028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.027135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.027162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.027284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.027309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 
00:26:09.723 [2024-12-10 04:14:04.027397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.027423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.027511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.027536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.027685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.027710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.027813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.027838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.027951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.027976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.028054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.028080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.028188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.028213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.028330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.028355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.028477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.028506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 00:26:09.723 [2024-12-10 04:14:04.028605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.723 [2024-12-10 04:14:04.028633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.723 qpair failed and we were unable to recover it. 
00:26:09.724 [2024-12-10 04:14:04.028745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.724 [2024-12-10 04:14:04.028770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.724 qpair failed and we were unable to recover it. 00:26:09.724 [2024-12-10 04:14:04.028908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.724 [2024-12-10 04:14:04.028933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:09.724 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.029049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.029075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.029161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.029191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.029279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.029305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.029410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.029436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.029520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.029555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.029689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.029715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.029853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.029879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.030014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.030040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 
00:26:10.010 [2024-12-10 04:14:04.030137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.030162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.030245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.030270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.030361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.030400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.030515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.030543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.030685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.030719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.030822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.030857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.030986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.031013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.031113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.031140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.031231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.031257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.031364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.031390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 
00:26:10.010 [2024-12-10 04:14:04.031509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.031536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.031664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.031691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.031778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.031806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.031906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.010 [2024-12-10 04:14:04.031933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.010 qpair failed and we were unable to recover it. 00:26:10.010 [2024-12-10 04:14:04.032070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.032097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.032212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.032239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.032358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.032385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.032499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.032528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.032664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.032692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.032789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.032815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 
00:26:10.011 [2024-12-10 04:14:04.032904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.032942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.033024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.033051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.033163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.033190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.033302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.033334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.033428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.033454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.033566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.033594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.033719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.033746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.033897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.033923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.034019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.034051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.034165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.034192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 
00:26:10.011 [2024-12-10 04:14:04.034287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.034313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.034398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.034426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.034540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.034577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.034665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.034691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.034789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.034820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.034932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.034959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.035045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.035072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.035161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.035188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.035269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.035295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.035380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.035406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 
00:26:10.011 [2024-12-10 04:14:04.035519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.035556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.035670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.035696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.035782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.035808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.035928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.035955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.036063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.036089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.036208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.036235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.036325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.036354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.036518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.036567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.036704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.036732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.036841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.036868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 
00:26:10.011 [2024-12-10 04:14:04.036990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.037038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.037166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.037193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.037275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.011 [2024-12-10 04:14:04.037302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.011 qpair failed and we were unable to recover it. 00:26:10.011 [2024-12-10 04:14:04.037381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.037408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.037523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.037559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.037651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.037678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.037764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.037790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.037880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.037913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.037993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.038019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.038106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.038133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 
00:26:10.012 [2024-12-10 04:14:04.038212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.038243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.038324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.038351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.038463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.038490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.038617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.038656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.038779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.038805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.038915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.038941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.039018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.039044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.039132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.039157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.039267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.039293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.039406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.039432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 
00:26:10.012 [2024-12-10 04:14:04.039559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.039585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.039666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.039691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.039782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.039808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.039899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.039924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.040048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.040074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.040164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.040190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.040268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.040293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.040376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.040402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.040485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.040511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.040609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.040634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 
00:26:10.012 [2024-12-10 04:14:04.040714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.040739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.040820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.040845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.040941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.040968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.041073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.041099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.041189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.041214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.041288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.041314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.041430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.041456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.041553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.041587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.041705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.041732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.041853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.041880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 
00:26:10.012 [2024-12-10 04:14:04.042001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.042028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.042137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.012 [2024-12-10 04:14:04.042164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.012 qpair failed and we were unable to recover it. 00:26:10.012 [2024-12-10 04:14:04.042288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.042314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.042395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.042422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.042540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.042574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.042689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.042716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.042828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.042853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.042934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.042959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.043044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.043070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.043156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.043182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 
00:26:10.013 [2024-12-10 04:14:04.043318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.043343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.043469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.043495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.043580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.043606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.043699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.043724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.043839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.043864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.043950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.043975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.044115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.044141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.044252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.044277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.044362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.044388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.044465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.044491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 
00:26:10.013 [2024-12-10 04:14:04.044611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.044656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.044758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.044787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.044899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.044926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.045043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.045075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.045197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.045229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.045345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.045378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.045500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.045527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.045667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.045693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.045785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.045813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.045900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.045926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 
00:26:10.013 [2024-12-10 04:14:04.046029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.046055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.046133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.046159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.046245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.046270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.046349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.046374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.046452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.046478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.046557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.046583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.046693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.046718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.046802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.046827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.046914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.046940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.047028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.047054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 
00:26:10.013 [2024-12-10 04:14:04.047136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.047161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.013 qpair failed and we were unable to recover it. 00:26:10.013 [2024-12-10 04:14:04.047252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.013 [2024-12-10 04:14:04.047278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.047390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.047415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.047493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.047519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.047620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.047646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.047731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.047756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.047873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.047898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.047972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.047997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.048078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.048104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.048177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.048203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 
00:26:10.014 [2024-12-10 04:14:04.048291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.048316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.048430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.048459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.048589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.048628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.048752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.048787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.048910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.048937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.049026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.049052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.049140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.049167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.049285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.049310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.049422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.049447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.049537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.049568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 
00:26:10.014 [2024-12-10 04:14:04.049655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.049680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.049788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.049814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.049896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.049921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.050001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.050026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.050140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.050165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.050287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.050312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.050426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.050451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.050528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.050559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.050646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.050671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.050758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.050783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 
00:26:10.014 [2024-12-10 04:14:04.050861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.050886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.050971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.050996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.051069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.051094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.051177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.051202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.014 qpair failed and we were unable to recover it. 00:26:10.014 [2024-12-10 04:14:04.051310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.014 [2024-12-10 04:14:04.051335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.051416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.051441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.051534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.051564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.051641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.051667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.051753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.051782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.051898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.051923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 
00:26:10.015 [2024-12-10 04:14:04.052002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.052028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.052141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.052167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.052241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.052266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.052352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.052378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.052463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.052489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.052585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.052611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.052700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.052726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.052816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.052841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.052935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.052961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.053052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.053078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 
00:26:10.015 [2024-12-10 04:14:04.053189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.053215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.053321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.053347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.053434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.053459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.053540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.053571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.053652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.053679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.053765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.053790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.053871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.053896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.054009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.054034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.054115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.054139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.054218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.054243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 
00:26:10.015 [2024-12-10 04:14:04.054351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.054376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.054467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.054492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.054617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.054643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.054727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.054753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.054830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.054855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.054953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.054983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.055066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.055092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.055201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.055226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.055303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.055328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.055407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.055433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 
00:26:10.015 [2024-12-10 04:14:04.055555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.055581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.055658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.055684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.055796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.055821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.015 [2024-12-10 04:14:04.055933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.015 [2024-12-10 04:14:04.055958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.015 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.056055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.056080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.056168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.056194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.056307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.056332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.056417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.056442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.056564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.056590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.056679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.056704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 
00:26:10.016 [2024-12-10 04:14:04.056818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.056844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.056930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.056955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.057049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.057074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.057191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.057216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.057345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.057389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.057500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.057541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.057648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.057677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.057799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.057826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.057940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.057966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.058054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.058081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 
00:26:10.016 [2024-12-10 04:14:04.058191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.058217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.058331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.058357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.058446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.058477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.058589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.058615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.058706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.058731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.058811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.058836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.058945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.058970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.059077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.059102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.059197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.059222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.059318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.059343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 
00:26:10.016 [2024-12-10 04:14:04.059421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.059447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.059568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.059614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.059713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.059742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.059831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.059858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.059939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.059966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.060057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.060089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.060211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.060238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.060323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.060350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.060455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.060485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.060586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.060613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 
00:26:10.016 [2024-12-10 04:14:04.060703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.060728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.060808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.060833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.016 [2024-12-10 04:14:04.060945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.016 [2024-12-10 04:14:04.060971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.016 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.061083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.061108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.061197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.061223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.061310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.061336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.061421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.061446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.061529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.061570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.061697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.061725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.061839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.061872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 
00:26:10.017 [2024-12-10 04:14:04.061975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.062002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.062100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.062127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.062218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.062244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.062329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.062355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.062441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.062466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.062576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.062602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.062681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.062707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.062817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.062842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.062928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.062953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.063045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.063071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 
00:26:10.017 [2024-12-10 04:14:04.063184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.063209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.063321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.063346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.063433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.063458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.063557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.063587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.063704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.063731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.063817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.063847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.063948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.063975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.064118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.064149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.064253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.064280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.064392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.064418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 
00:26:10.017 [2024-12-10 04:14:04.064528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.064565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.064667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.064693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.064776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.064802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.064887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.064920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.065040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.065066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.065203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.065232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.065329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.065361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.065482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.065509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.065611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.065639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.065735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.065763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 
00:26:10.017 [2024-12-10 04:14:04.065879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.065905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.017 [2024-12-10 04:14:04.065995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.017 [2024-12-10 04:14:04.066022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.017 qpair failed and we were unable to recover it. 00:26:10.018 [2024-12-10 04:14:04.066119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.018 [2024-12-10 04:14:04.066147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.018 qpair failed and we were unable to recover it. 00:26:10.018 [2024-12-10 04:14:04.066237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.018 [2024-12-10 04:14:04.066265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.018 qpair failed and we were unable to recover it. 00:26:10.018 [2024-12-10 04:14:04.066352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.018 [2024-12-10 04:14:04.066377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.018 qpair failed and we were unable to recover it. 00:26:10.018 [2024-12-10 04:14:04.066486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.018 [2024-12-10 04:14:04.066512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.018 qpair failed and we were unable to recover it. 00:26:10.018 [2024-12-10 04:14:04.066616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.018 [2024-12-10 04:14:04.066642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.018 qpair failed and we were unable to recover it. 00:26:10.018 [2024-12-10 04:14:04.066758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.018 [2024-12-10 04:14:04.066783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.018 qpair failed and we were unable to recover it. 00:26:10.018 [2024-12-10 04:14:04.066873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.018 [2024-12-10 04:14:04.066898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.018 qpair failed and we were unable to recover it. 00:26:10.018 [2024-12-10 04:14:04.066981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.018 [2024-12-10 04:14:04.067007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.018 qpair failed and we were unable to recover it. 
00:26:10.018 [2024-12-10 04:14:04.067124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.018 [2024-12-10 04:14:04.067149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:10.018 qpair failed and we were unable to recover it.
00:26:10.018 [... the three-line error pattern above repeats approximately 210 times between 04:14:04.067 and 04:14:04.095, differing only in the timestamps and in the tqpair handle (0x1559fa0, 0x7f5ba4000b90, 0x7f5bb0000b90, 0x7f5ba8000b90); every connection attempt to addr=10.0.0.2, port=4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:26:10.023 [2024-12-10 04:14:04.094807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.023 [2024-12-10 04:14:04.094832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:10.023 qpair failed and we were unable to recover it.
00:26:10.023 [2024-12-10 04:14:04.094964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.023 [2024-12-10 04:14:04.094997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.023 qpair failed and we were unable to recover it. 00:26:10.023 [2024-12-10 04:14:04.095093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.023 [2024-12-10 04:14:04.095124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.023 qpair failed and we were unable to recover it. 00:26:10.023 [2024-12-10 04:14:04.095236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.023 [2024-12-10 04:14:04.095282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.023 qpair failed and we were unable to recover it. 00:26:10.023 [2024-12-10 04:14:04.095453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.023 [2024-12-10 04:14:04.095485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.023 qpair failed and we were unable to recover it. 00:26:10.023 [2024-12-10 04:14:04.095631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.023 [2024-12-10 04:14:04.095656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.023 qpair failed and we were unable to recover it. 00:26:10.023 [2024-12-10 04:14:04.095753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.023 [2024-12-10 04:14:04.095778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.023 qpair failed and we were unable to recover it. 00:26:10.023 [2024-12-10 04:14:04.095914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.095939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.096051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.096077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.096201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.096247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.096363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.096388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 
00:26:10.024 [2024-12-10 04:14:04.096496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.096522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.096634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.096661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.096780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.096806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.096919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.096945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.097057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.097084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.097177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.097203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.097293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.097318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.097426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.097452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.097561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.097601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.097690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.097718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 
00:26:10.024 [2024-12-10 04:14:04.097864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.097889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.097982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.098014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.098165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.098197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.098322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.098354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.098464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.098490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.098578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.098604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.098740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.098768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.098882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.098908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.098990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.099016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.099100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.099127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 
00:26:10.024 [2024-12-10 04:14:04.099219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.099244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.099332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.099365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.099476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.099502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.099620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.099646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.099765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.099791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.099879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.099905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.100004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.100030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.100171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.100203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.100345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.100388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.100506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.100538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 
00:26:10.024 [2024-12-10 04:14:04.100656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.100682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.100772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.100797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.100922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.100949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.101035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.101061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.101159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.024 [2024-12-10 04:14:04.101185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.024 qpair failed and we were unable to recover it. 00:26:10.024 [2024-12-10 04:14:04.101304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.101352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.101461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.101486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.101602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.101628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.101743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.101769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.101850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.101875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 
00:26:10.025 [2024-12-10 04:14:04.101962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.101989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.102105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.102131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.102220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.102246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.102336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.102362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.102477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.102504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.102610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.102637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.102750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.102776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.102861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.102888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.102977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.103009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.103181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.103231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 
00:26:10.025 [2024-12-10 04:14:04.103345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.103377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.103504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.103530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.103654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.103680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.103793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.103818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.103908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.103933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.104024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.104052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.104141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.104168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.104256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.104282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.104391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.104417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.104491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.104516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 
00:26:10.025 [2024-12-10 04:14:04.104636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.104662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.104755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.104782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.104874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.104900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.104988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.105014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.105097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.105122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.105238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.105263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.105348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.105374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.105490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.105515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.105608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.105633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.105748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.105773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 
00:26:10.025 [2024-12-10 04:14:04.105858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.105883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.105970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.105995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.106098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.106132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.106231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.025 [2024-12-10 04:14:04.106265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.025 qpair failed and we were unable to recover it. 00:26:10.025 [2024-12-10 04:14:04.106407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.106440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.106575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.106614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.106715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.106743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.106838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.106865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.107000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.107032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.107188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.107232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 
00:26:10.026 [2024-12-10 04:14:04.107319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.107344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.107465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.107495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.107600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.107627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.107742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.107768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.107858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.107884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.107971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.107997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.108081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.108106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.108226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.108254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.108334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.108360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.108447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.108473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 
00:26:10.026 [2024-12-10 04:14:04.108594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.108621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.108704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.108730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.108821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.108848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.108924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.108949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.109034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.109058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.109168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.109199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.109307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.109339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.109449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.109482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.109632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.109658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.109747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.109773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 
00:26:10.026 [2024-12-10 04:14:04.109878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.109911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.110046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.110078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.110218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.110251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.110389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.110432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.110556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.110583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.110699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.110725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.110818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.110844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.110932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.110958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.111126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.111158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.111249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.111282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 
00:26:10.026 [2024-12-10 04:14:04.111382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.111441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.111608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.111637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.111737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.026 [2024-12-10 04:14:04.111770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.026 qpair failed and we were unable to recover it. 00:26:10.026 [2024-12-10 04:14:04.111894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.027 [2024-12-10 04:14:04.111921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.027 qpair failed and we were unable to recover it. 00:26:10.027 [2024-12-10 04:14:04.112057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.027 [2024-12-10 04:14:04.112105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.027 qpair failed and we were unable to recover it. 00:26:10.027 [2024-12-10 04:14:04.112258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.027 [2024-12-10 04:14:04.112317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.027 qpair failed and we were unable to recover it. 00:26:10.027 [2024-12-10 04:14:04.112413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.027 [2024-12-10 04:14:04.112439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.027 qpair failed and we were unable to recover it. 00:26:10.027 [2024-12-10 04:14:04.112524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.027 [2024-12-10 04:14:04.112559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.027 qpair failed and we were unable to recover it. 00:26:10.027 [2024-12-10 04:14:04.112689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.027 [2024-12-10 04:14:04.112716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.027 qpair failed and we were unable to recover it. 00:26:10.027 [2024-12-10 04:14:04.112803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.027 [2024-12-10 04:14:04.112829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.027 qpair failed and we were unable to recover it. 
00:26:10.027 [2024-12-10 04:14:04.112917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.027 [2024-12-10 04:14:04.112944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420
00:26:10.027 qpair failed and we were unable to recover it.
00:26:10.027 - 00:26:10.032 [2024-12-10 04:14:04.113040 - 04:14:04.140633] the same sequence repeats continuously: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90, 0x7f5ba8000b90, or 0x1559fa0 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."
00:26:10.032 [2024-12-10 04:14:04.140726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.032 [2024-12-10 04:14:04.140752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.032 qpair failed and we were unable to recover it. 00:26:10.032 [2024-12-10 04:14:04.140896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.032 [2024-12-10 04:14:04.140921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.032 qpair failed and we were unable to recover it. 00:26:10.032 [2024-12-10 04:14:04.141007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.032 [2024-12-10 04:14:04.141032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.032 qpair failed and we were unable to recover it. 00:26:10.032 [2024-12-10 04:14:04.141148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.032 [2024-12-10 04:14:04.141174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.032 qpair failed and we were unable to recover it. 00:26:10.032 [2024-12-10 04:14:04.141261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.032 [2024-12-10 04:14:04.141285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.032 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.141375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.141401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.141489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.141515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.141612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.141638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.141754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.141779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.141864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.141890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 
00:26:10.033 [2024-12-10 04:14:04.142007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.142032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.142153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.142178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.142262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.142287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.142376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.142401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.142556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.142585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.142670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.142697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.142804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.142830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.142941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.142967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.143051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.143078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.143190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.143216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 
00:26:10.033 [2024-12-10 04:14:04.143307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.143333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.143448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.143474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.143587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.143614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.143698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.143723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.143804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.143829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.143936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.143962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.144073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.144106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.144218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.144250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.144389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.144421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.144523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.144564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 
00:26:10.033 [2024-12-10 04:14:04.144681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.144709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.144840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.144885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.144998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.145044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.145180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.145225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.145310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.145335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.145446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.145471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.145575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.145601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.145689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.145715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.145797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.145822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.145909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.145934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 
00:26:10.033 [2024-12-10 04:14:04.146050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.146076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.146170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.146195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.146301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.146326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.033 [2024-12-10 04:14:04.146407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.033 [2024-12-10 04:14:04.146432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.033 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.146524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.146553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.146632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.146658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.146748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.146774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.146884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.146909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.147020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.147045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.147146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.147171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 
00:26:10.034 [2024-12-10 04:14:04.147249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.147274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.147387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.147412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.147502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.147528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.147643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.147668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.147778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.147816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.147910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.147937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.148036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.148062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.148149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.148176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.148285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.148312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.148395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.148421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 
00:26:10.034 [2024-12-10 04:14:04.148535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.148568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.148686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.148712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.148835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.148867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.148971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.149003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.149152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.149186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.149297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.149330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.149476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.149503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.149624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.149654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.149799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.149851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.149962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.150008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 
00:26:10.034 [2024-12-10 04:14:04.150144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.150190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.150304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.150329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.150417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.150443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.150564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.150590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.150703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.150728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.150827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.150852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.150936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.150961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.151077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.151102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.151183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.151208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.151328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.151353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 
00:26:10.034 [2024-12-10 04:14:04.151438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.151463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.151554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.151580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.151661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.034 [2024-12-10 04:14:04.151686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.034 qpair failed and we were unable to recover it. 00:26:10.034 [2024-12-10 04:14:04.151801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.151826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.151905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.151930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.152040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.152065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.152158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.152184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.152271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.152297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.152383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.152408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.152517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.152542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 
00:26:10.035 [2024-12-10 04:14:04.152656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.152681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.152773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.152799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.152884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.152909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.152993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.153019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.153098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.153124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.153206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.153231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.153318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.153343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.153459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.153485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.153565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.153591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.153689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.153714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 
00:26:10.035 [2024-12-10 04:14:04.153793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.153819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.153898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.153923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.154019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.154045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.154124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.154150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.154238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.154263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.154378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.154403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.154493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.154519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.154665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.154690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.154777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.154807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.154891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.154917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 
00:26:10.035 [2024-12-10 04:14:04.154993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.155018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.155103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.155128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.155230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.155255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.155338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.155363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.155449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.155475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.155587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.155613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.155699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.155724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.155814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.155839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.155980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.156005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.156094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.156119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 
00:26:10.035 [2024-12-10 04:14:04.156212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.156237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.156326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.035 [2024-12-10 04:14:04.156351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.035 qpair failed and we were unable to recover it. 00:26:10.035 [2024-12-10 04:14:04.156442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.036 [2024-12-10 04:14:04.156468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.036 qpair failed and we were unable to recover it. 00:26:10.036 [2024-12-10 04:14:04.156578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.036 [2024-12-10 04:14:04.156605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.036 qpair failed and we were unable to recover it. 00:26:10.036 [2024-12-10 04:14:04.156694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.036 [2024-12-10 04:14:04.156720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.036 qpair failed and we were unable to recover it. 00:26:10.036 [2024-12-10 04:14:04.156826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.036 [2024-12-10 04:14:04.156851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.036 qpair failed and we were unable to recover it. 00:26:10.036 [2024-12-10 04:14:04.156971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.036 [2024-12-10 04:14:04.156997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.036 qpair failed and we were unable to recover it. 00:26:10.036 [2024-12-10 04:14:04.157117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.036 [2024-12-10 04:14:04.157143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.036 qpair failed and we were unable to recover it. 00:26:10.036 [2024-12-10 04:14:04.157232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.036 [2024-12-10 04:14:04.157257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.036 qpair failed and we were unable to recover it. 00:26:10.036 [2024-12-10 04:14:04.157334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.036 [2024-12-10 04:14:04.157359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.036 qpair failed and we were unable to recover it. 
00:26:10.036 [2024-12-10 04:14:04.157432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.036 [2024-12-10 04:14:04.157457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:10.036 qpair failed and we were unable to recover it.
00:26:10.038 [2024-12-10 04:14:04.165628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.038 [2024-12-10 04:14:04.165668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:10.038 qpair failed and we were unable to recover it.
00:26:10.038 [2024-12-10 04:14:04.166326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.038 [2024-12-10 04:14:04.166368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420
00:26:10.038 qpair failed and we were unable to recover it.
00:26:10.038 [2024-12-10 04:14:04.166668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.038 [2024-12-10 04:14:04.166711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420
00:26:10.038 qpair failed and we were unable to recover it.
00:26:10.041 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=... with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 04:14:04.157 through 04:14:04.184 for tqpair=0x1559fa0, 0x7f5ba8000b90, 0x7f5bb0000b90 and 0x7f5ba4000b90 ...]
00:26:10.041 [2024-12-10 04:14:04.184398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.041 [2024-12-10 04:14:04.184426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.041 qpair failed and we were unable to recover it. 00:26:10.041 [2024-12-10 04:14:04.184510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.041 [2024-12-10 04:14:04.184536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.041 qpair failed and we were unable to recover it. 00:26:10.041 [2024-12-10 04:14:04.184654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.041 [2024-12-10 04:14:04.184680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.041 qpair failed and we were unable to recover it. 00:26:10.041 [2024-12-10 04:14:04.184784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.041 [2024-12-10 04:14:04.184816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.041 qpair failed and we were unable to recover it. 00:26:10.041 [2024-12-10 04:14:04.184920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.041 [2024-12-10 04:14:04.184945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.041 qpair failed and we were unable to recover it. 00:26:10.041 [2024-12-10 04:14:04.185031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.041 [2024-12-10 04:14:04.185057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.041 qpair failed and we were unable to recover it. 00:26:10.041 [2024-12-10 04:14:04.185136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.041 [2024-12-10 04:14:04.185162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.041 qpair failed and we were unable to recover it. 00:26:10.041 [2024-12-10 04:14:04.185278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.185305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.185427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.185453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.185568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.185594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 
00:26:10.042 [2024-12-10 04:14:04.185673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.185698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.185819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.185846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.185953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.185979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.186060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.186086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.186167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.186192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.186284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.186310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.186386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.186412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.186496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.186522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.186619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.186645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.186760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.186785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 
00:26:10.042 [2024-12-10 04:14:04.186868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.186893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.186984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.187010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.187124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.187149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.187255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.187280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.187358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.187383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.187495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.187520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.187620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.187646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.187732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.187757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.187866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.187891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.187981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.188007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 
00:26:10.042 [2024-12-10 04:14:04.188090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.188117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.188227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.188253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.188329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.188354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.188435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.188461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.188542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.188574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.188655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.188680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.188754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.188779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.188865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.188890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.188999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.189029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.189111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.189137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 
00:26:10.042 [2024-12-10 04:14:04.189252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.189278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.189367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.189392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.189469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.189494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.189584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.189611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.189697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.189723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.189813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.189838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.042 qpair failed and we were unable to recover it. 00:26:10.042 [2024-12-10 04:14:04.189948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.042 [2024-12-10 04:14:04.189973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.190063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.190088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.190166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.190192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.190301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.190327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 
00:26:10.043 [2024-12-10 04:14:04.190446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.190472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.190559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.190585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.190668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.190693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.190771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.190796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.190902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.190928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.191015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.191040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.191154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.191179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.191273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.191298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.191382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.191407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.191523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.191553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 
00:26:10.043 [2024-12-10 04:14:04.191665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.191690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.191772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.191797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.191906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.191932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.192026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.192052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.192136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.192162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.192286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.192318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.192440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.192479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.192573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.192602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.192717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.192745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.192836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.192861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 
00:26:10.043 [2024-12-10 04:14:04.192951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.192984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.193065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.193091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.193210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.193236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.193383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.193412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.193517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.193551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.193659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.193691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.193810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.193841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.193957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.193990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.194139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.194183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.194302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.194327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 
00:26:10.043 [2024-12-10 04:14:04.194407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.194433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.194513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.194538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.194661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.194687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.194770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.194796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.194870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.194895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.043 [2024-12-10 04:14:04.195016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.043 [2024-12-10 04:14:04.195042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.043 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.195177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.195202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.195290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.195315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.195397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.195422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.195551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.195577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 
00:26:10.044 [2024-12-10 04:14:04.195670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.195695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.195809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.195834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.195910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.195939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.196030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.196056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.196136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.196162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.196275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.196300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.196401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.196426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.196503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.196529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.196660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.196686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.196765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.196790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 
00:26:10.044 [2024-12-10 04:14:04.196899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.196924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.197040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.197066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.197178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.197203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.197291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.197316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.197435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.197460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.197535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.197568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.197664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.197689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.197801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.197825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.197910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.197935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.198021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.198046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 
00:26:10.044 [2024-12-10 04:14:04.198127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.198151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.198246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.198271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.198354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.198379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.198462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.198487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.198609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.198639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.198736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.198762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.198848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.198875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.198955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.198983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.199073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.199100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.199213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.199245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 
00:26:10.044 [2024-12-10 04:14:04.199336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.199369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.199459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.199486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.199574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.199601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.199715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.199740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.199832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.044 [2024-12-10 04:14:04.199857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.044 qpair failed and we were unable to recover it. 00:26:10.044 [2024-12-10 04:14:04.199985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.200011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.200110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.200157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.200244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.200269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.200354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.200380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.200484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.200509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 
00:26:10.045 [2024-12-10 04:14:04.200602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.200627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.200738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.200763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.200852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.200878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.200997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.201022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.201100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.201125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.201243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.201268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.201381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.201406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.201484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.201509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.201616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.201642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.201716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.201741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 
00:26:10.045 [2024-12-10 04:14:04.201852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.201877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.201992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.202017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.202097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.202123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.202208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.202234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.202312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.202337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.202432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.202457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.202573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.202603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.202716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.202740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.202823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.202848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.202930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.202955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 
00:26:10.045 [2024-12-10 04:14:04.203068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.203093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.203187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.203212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.203298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.203323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.203406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.203431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.203517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.203542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.203643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.203669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.203760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.203785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.203904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.203930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.204011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.204036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.204144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.204169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 
00:26:10.045 [2024-12-10 04:14:04.204262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.204287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.204370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.204395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.204472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.204498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.204589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.204615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.045 qpair failed and we were unable to recover it. 00:26:10.045 [2024-12-10 04:14:04.204704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.045 [2024-12-10 04:14:04.204730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.204846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.204872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.204977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.205002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.205078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.205103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.205223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.205248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.205339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.205364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 
00:26:10.046 [2024-12-10 04:14:04.205476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.205501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.205588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.205614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.205705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.205730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.205824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.205849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.205931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.205956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.206040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.206066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.206156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.206181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.206270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.206295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.206406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.206431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.206549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.206575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 
00:26:10.046 [2024-12-10 04:14:04.206657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.206682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.206764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.206789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.206908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.206933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.207022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.207047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.207154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.207179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.207326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.207351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.207433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.207458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.207556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.207583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.207664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.207689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.207769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.207794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 
00:26:10.046 [2024-12-10 04:14:04.207876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.207901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.207986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.208011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.208094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.208119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.208198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.208223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.208333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.208358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.208431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.208456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.208541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.208573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.208690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.208716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.208824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.046 [2024-12-10 04:14:04.208850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.046 qpair failed and we were unable to recover it. 00:26:10.046 [2024-12-10 04:14:04.208975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.209000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 
00:26:10.047 [2024-12-10 04:14:04.209109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.209134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.209215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.209240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.209321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.209346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.209422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.209447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.209535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.209586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.209706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.209730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.209819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.209843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.209921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.209946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.210034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.210061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.210145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.210170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 
00:26:10.047 [2024-12-10 04:14:04.210261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.210286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.210375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.210401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.210494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.210520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.210643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.210670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.210756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.210785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.210871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.210896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.210979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.211003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.211103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.211128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.211205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.211230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.211327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.211352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 
00:26:10.047 [2024-12-10 04:14:04.211465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.211490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.211611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.211637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.211724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.211749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.211834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.211859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.211967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.211993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.212105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.212131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.212267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.212293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.212412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.212438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.212562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.212587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.212679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.212704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 
00:26:10.047 [2024-12-10 04:14:04.212783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.212809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.212889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.212915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.212992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.213016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.213132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.213157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.213241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.213265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.213343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.213368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.213455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.213479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.213563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.213589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.047 qpair failed and we were unable to recover it. 00:26:10.047 [2024-12-10 04:14:04.213702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-12-10 04:14:04.213727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.213809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.213835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 
00:26:10.048 [2024-12-10 04:14:04.213922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.213948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.214027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.214056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.214171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.214196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.214304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.214330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.214416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.214441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.214518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.214542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.214697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.214723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.214812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.214837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.214947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.214972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.215088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.215114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 
00:26:10.048 [2024-12-10 04:14:04.215228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.215253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.215344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.215369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.215475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.215501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.215585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.215611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.215697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.215722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.215819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.215843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.215930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.215954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.216066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.216091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.216182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.216206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.216317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.216342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 
00:26:10.048 [2024-12-10 04:14:04.216436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.216460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.216542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.216574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.216656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.216682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.216772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.216798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.216874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.216898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.216981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.217005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.217088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.217112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.217220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.217245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.217333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.217358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.217451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.217475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 
00:26:10.048 [2024-12-10 04:14:04.217626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.217651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.217726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.217751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.217830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.217855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.217940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.217964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.218044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.218070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.218151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.218175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.218262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.218287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.048 [2024-12-10 04:14:04.218416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.048 [2024-12-10 04:14:04.218441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.048 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.218563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.218589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.218708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.218733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 
00:26:10.049 [2024-12-10 04:14:04.218827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.218852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.218968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.218994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.219102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.219141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.219266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.219301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.219419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.219446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.219570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.219598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.219719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.219746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.219829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.219854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.219938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.219963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.220057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.220081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 
00:26:10.049 [2024-12-10 04:14:04.220170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.220194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.220277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.220301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.220386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.220410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.220492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.220516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.220605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.220630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.220742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.220767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.220884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.220908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.221025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.221050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.221159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.221184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.221314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.221354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 
00:26:10.049 [2024-12-10 04:14:04.221453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.221481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.221598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.221625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.221737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.221763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.221876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.221902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.222016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.222041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.222122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.222148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.222264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.222290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.222424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.222451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.222543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.222574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.222683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.222714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 
00:26:10.049 [2024-12-10 04:14:04.222873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.222917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.223042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.223087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.223188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.223218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.223317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.223341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.223453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.223478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.223558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.223583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.223672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.049 [2024-12-10 04:14:04.223697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.049 qpair failed and we were unable to recover it. 00:26:10.049 [2024-12-10 04:14:04.223776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.223801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.223912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.223936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.224022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.224047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 
00:26:10.050 [2024-12-10 04:14:04.224162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.224188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.224275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.224300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.224382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.224406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.224500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.224525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.224609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.224634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.224748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.224773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.224855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.224880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.224962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.224986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.225073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.225097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.225187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.225212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 
00:26:10.050 [2024-12-10 04:14:04.225325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.225350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.225455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.225495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.225592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.225621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.225713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.225741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.225831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.225857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.225974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.226000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.226121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1567f30 is same with the state(6) to be set 00:26:10.050 [2024-12-10 04:14:04.226297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.226342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.226443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.226473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.226600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.226628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 
00:26:10.050 [2024-12-10 04:14:04.226759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.226804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.226909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.226938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.227093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.227138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.227243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.227277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.227386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.227418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.227522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.227565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.227743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.227775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.227866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.227898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.228035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.228066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.228160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.228191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 
00:26:10.050 [2024-12-10 04:14:04.228296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.228327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.050 [2024-12-10 04:14:04.228459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.050 [2024-12-10 04:14:04.228492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.050 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.228611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.228639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.228774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.228822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.228929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.228959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.229054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.229078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.229205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.229249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.229361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.229385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.229469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.229497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.229658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.229700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 
00:26:10.051 [2024-12-10 04:14:04.229839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.229873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.229989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.230015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.230163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.230196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.230305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.230343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.230480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.230519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.230645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.230672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.230783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.230809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.230896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.230923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.231036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.231067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.231165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.231197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 
00:26:10.051 [2024-12-10 04:14:04.231302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.231333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.231475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.231504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.231625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.231652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.231781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.231811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.231928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.231961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.232138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.232182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.232289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.232319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.232450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.232477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.232562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.232590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.232675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.232701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 
00:26:10.051 [2024-12-10 04:14:04.232819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.232850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.232984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.233016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.233121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.233153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.233252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.233285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.233381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.233413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.233555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.233600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.233721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.233748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.233892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.233935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.234066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.234096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.234194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.234219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 
00:26:10.051 [2024-12-10 04:14:04.234293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.051 [2024-12-10 04:14:04.234323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.051 qpair failed and we were unable to recover it. 00:26:10.051 [2024-12-10 04:14:04.234466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.234491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.234603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.234630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.234715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.234741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.234854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.234881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.234982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.235013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.235168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.235199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.235331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.235362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.235502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.235528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.235623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.235650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 
00:26:10.052 [2024-12-10 04:14:04.235766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.235793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.235937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.235967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.236094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.236125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.236247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.236277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.236407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.236439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.236568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.236612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.236717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.236743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.236856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.236883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.236989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.237018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.237176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.237219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 
00:26:10.052 [2024-12-10 04:14:04.237379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.237423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.237508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.237533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.237625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.237650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.237786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.237819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.237943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.237975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.238075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.238107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.238244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.238277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.238379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.238412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.238526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.238565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.238666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.238692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 
00:26:10.052 [2024-12-10 04:14:04.238806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.238833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.238920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.238946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.239062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.239094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.239254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.239286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.239424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.239457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.239594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.239620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.239758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.239784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.239871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.239898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.240035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.240084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 00:26:10.052 [2024-12-10 04:14:04.240189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.052 [2024-12-10 04:14:04.240220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.052 qpair failed and we were unable to recover it. 
00:26:10.053 [2024-12-10 04:14:04.240379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.240434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.240564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.240612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.240703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.240729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.240840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.240882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.241031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.241064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.241196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.241229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.241350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.241376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.241503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.241531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.241628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.241654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.241748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.241773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 
00:26:10.053 [2024-12-10 04:14:04.241885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.241911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.241992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.242018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.242137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.242163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.242273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.242324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.242451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.242475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.242588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.242614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.242716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.242763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.242869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.242913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.242999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.243024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.243106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.243134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 
00:26:10.053 [2024-12-10 04:14:04.243216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.243241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.243356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.243381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.243470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.243496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.243609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.243635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.243739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.243765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.243928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.243961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.244098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.244130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.244276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.244319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.244465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.244492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.244585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.244611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 
00:26:10.053 [2024-12-10 04:14:04.244698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.244723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.244859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.244904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.245014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.245060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.245164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.245211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.245299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.245324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.245438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.245463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.245552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.245581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.245723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.245749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.245830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.245855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 00:26:10.053 [2024-12-10 04:14:04.245972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.053 [2024-12-10 04:14:04.245997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.053 qpair failed and we were unable to recover it. 
00:26:10.053 [2024-12-10 04:14:04.246084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.053 [2024-12-10 04:14:04.246109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:10.053 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 04:14:04.246 through 04:14:04.280: posix.c:1054:posix_sock_create reports connect() failed, errno = 111, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair handles 0x7f5ba8000b90, 0x7f5ba4000b90, and 0x1559fa0 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:26:10.059 [2024-12-10 04:14:04.280386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.059 [2024-12-10 04:14:04.280419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:10.059 qpair failed and we were unable to recover it.
00:26:10.059 [2024-12-10 04:14:04.280537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.280567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.280674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.280699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.280815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.280840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.280937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.280982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.281142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.281175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.281309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.281348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.281465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.281490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.281574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.281601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.281720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.281746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.281830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.281856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 
00:26:10.059 [2024-12-10 04:14:04.281970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.281995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.282076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.282101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.282197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.282236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.282382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.282431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.282526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.282564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.282684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.282710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.282843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.282890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.282991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.283024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.283133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.283157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.283250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.283275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 
00:26:10.059 [2024-12-10 04:14:04.283363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.283387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.283528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.283563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.059 [2024-12-10 04:14:04.283684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.059 [2024-12-10 04:14:04.283711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.059 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.283815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.283841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.283931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.283956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.284073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.284098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.284189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.284214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.284389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.284436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.284552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.284579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.284695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.284720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 
00:26:10.060 [2024-12-10 04:14:04.284872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.284920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.285063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.285097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.285208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.285246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.285375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.285401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.285510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.285534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.285639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.285671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.285778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.285805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.285922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.285947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.286081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.286107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.286229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.286261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 
00:26:10.060 [2024-12-10 04:14:04.286393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.286426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.286570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.286613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.286760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.286795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.286905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.286938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.287096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.287130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.287237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.287271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.287448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.287481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.287627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.287654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.287768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.287793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.287903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.287928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 
00:26:10.060 [2024-12-10 04:14:04.288042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.288067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.288182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.288215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.288350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.288385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.288556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.288601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.288681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.288707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.288859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.288885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.289019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.289044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.289189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.289221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.289327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.289358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.289564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.289622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 
00:26:10.060 [2024-12-10 04:14:04.289765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.289804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.289919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.289952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.290105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.290148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.290258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.290309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.290423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.290448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.290564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.290590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.290678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.290703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.290813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.060 [2024-12-10 04:14:04.290838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.060 qpair failed and we were unable to recover it. 00:26:10.060 [2024-12-10 04:14:04.290920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.290945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.291053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.291078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 
00:26:10.061 [2024-12-10 04:14:04.291163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.291188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.291318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.291357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.291461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.291504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.291620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.291649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.291794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.291827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.291919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.291946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.292033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.292060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.292158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.292191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.292351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.292390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.292534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.292591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 
00:26:10.061 [2024-12-10 04:14:04.292700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.292733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.292895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.292932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.293062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.293096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.293221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.293255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.293399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.293436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.293626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.293654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.293753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.293781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.293913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.293957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.294089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.294136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.294235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.294268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 
00:26:10.061 [2024-12-10 04:14:04.294403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.294430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.294515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.294540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.294660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.294687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.294833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.294866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.294999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.295031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.295130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.295164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.295282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.295313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.295440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.295472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.295618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.295645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.295763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.295794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 
00:26:10.061 [2024-12-10 04:14:04.295886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.295912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.296001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.296025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.296165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.296212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.296345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.296391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.296502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.296526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.296619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.296645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.296756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.296806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.296886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.296911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.297046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.297071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.297180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.297225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 
00:26:10.061 [2024-12-10 04:14:04.297311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.297336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.297425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.297449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.297565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.297591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.297736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.061 [2024-12-10 04:14:04.297761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.061 qpair failed and we were unable to recover it. 00:26:10.061 [2024-12-10 04:14:04.297849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.297873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.298010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.298035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.298117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.298143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.298228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.298255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.298347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.298374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.298467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.298497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 
00:26:10.062 [2024-12-10 04:14:04.298648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.298684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.298829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.298865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.299051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.299094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.299238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.299288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.299397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.299423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.299534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.299572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.299651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.299682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.299821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.299846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.299976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.300008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.300118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.300147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 
00:26:10.062 [2024-12-10 04:14:04.300235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.300263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.300373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.300411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.300526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.300560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.300676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.300701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.300817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.300842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.301018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.301050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.301158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.301189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.301322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.301353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.301495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.301520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.301615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.301641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 
00:26:10.062 [2024-12-10 04:14:04.301727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.301752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.301861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.301887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.302007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.302031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.302112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.302136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.302246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.302277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.302370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.302403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.302513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.302552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.302665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.302691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.302795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.302821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.302934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.302960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 
00:26:10.062 [2024-12-10 04:14:04.303080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.303141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.303256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.303304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.303393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.303418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.303560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.303589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.303709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.303735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.303841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.303867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.304030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.304063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.304178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.304212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.304351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.304383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.062 qpair failed and we were unable to recover it. 00:26:10.062 [2024-12-10 04:14:04.304608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.062 [2024-12-10 04:14:04.304636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 
00:26:10.063 [2024-12-10 04:14:04.304803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.304847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.305007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.305054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.305200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.305248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.305336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.305361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.305475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.305499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.305643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.305689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.305823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.305854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.305980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.306004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.306119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.306144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.306237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.306262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 
00:26:10.063 [2024-12-10 04:14:04.306347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.306372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.306491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.306516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.306604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.306629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.306740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.306764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.306854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.306879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.306963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.306989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.307123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.307147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.307231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.307255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.307360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.307385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.307489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.307514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 
00:26:10.063 [2024-12-10 04:14:04.307661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.307700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.307821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.307847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.307958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.307985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.308121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.308148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.308234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.308260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.308357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.308396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.308483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.308510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.308633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.308659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.308768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.308794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.308880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.308905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 
00:26:10.063 [2024-12-10 04:14:04.308984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.309010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.309128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.309156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.309255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.309287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.309441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.309474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.309564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.309592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.309719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.309746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.309834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.309860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.309975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.310001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.310092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.310117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 00:26:10.063 [2024-12-10 04:14:04.310228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.063 [2024-12-10 04:14:04.310253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.063 qpair failed and we were unable to recover it. 
00:26:10.063 [2024-12-10 04:14:04.310366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.310390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.310499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.310524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.310678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.310703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.310790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.310815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.310902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.310926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.311035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.311060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.311195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.311220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.311319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.311359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.311482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.311510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.311640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.311670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 
00:26:10.064 [2024-12-10 04:14:04.311794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.311826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.311958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.311991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.312116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.312151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.312267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.312299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.312442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.312475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.312594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.312623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.312736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.312762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.312854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.312880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.313024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.313059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.313212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.313245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 
00:26:10.064 [2024-12-10 04:14:04.313384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.313423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.313553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.313612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.313734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.313762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.313903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.313929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.314012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.314039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.314157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.314218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.314390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.314437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.314524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.314555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.314674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.314702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.314855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.314901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 
00:26:10.064 [2024-12-10 04:14:04.315003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.315034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.315145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.315171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.315313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.315338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.315454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.315479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.315582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.315611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.315735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.315761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.315847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.315873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.315966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.315992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.316079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.316106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.316189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.316218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 
00:26:10.064 [2024-12-10 04:14:04.316336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.316369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.316520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.316564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.316703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.316729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.316853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.316896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.317042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.317076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.064 [2024-12-10 04:14:04.317194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.064 [2024-12-10 04:14:04.317228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.064 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.317351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.317379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.317516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.317554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.317645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.317670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.317799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.317830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 
00:26:10.065 [2024-12-10 04:14:04.317952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.317984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.318106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.318155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.318236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.318261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.318355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.318380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.318465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.318489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.318608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.318636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.318751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.318778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.318934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.318960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.319048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.319075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.319188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.319214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 
00:26:10.065 [2024-12-10 04:14:04.319306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.319335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.319469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.319496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.319607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.319633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.319714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.319739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.319852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.319877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.319991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.320015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.320105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.320130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.320250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.320276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.320365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.320390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.320514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.320541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 
00:26:10.065 [2024-12-10 04:14:04.320639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.320665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.320775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.320800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.320910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.320935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.321049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.321076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.321199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.321241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.321365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.321392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.321486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.321515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.321634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.321662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.321759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.321787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.321889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.321922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 
00:26:10.065 [2024-12-10 04:14:04.322032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.322067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.322232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.322266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.322449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.322494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.322583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.322611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.322731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.322757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.322870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.322896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.323036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.323071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.323206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.323238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.323357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.323389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.323495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.323522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 
00:26:10.065 [2024-12-10 04:14:04.323639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.065 [2024-12-10 04:14:04.323666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.065 qpair failed and we were unable to recover it. 00:26:10.065 [2024-12-10 04:14:04.323782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.323808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.323953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.323978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.324065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.324091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.324175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.324201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.324310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.324337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.324419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.324444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.324562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.324588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.324668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.324693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.324826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.324871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 
00:26:10.066 [2024-12-10 04:14:04.325001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.325032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.325184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.325218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.325386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.325435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.325575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.325620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.325725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.325758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.325886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.325919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.326056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.326108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.326246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.326295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.326402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.326435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.326594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.326623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 
00:26:10.066 [2024-12-10 04:14:04.326711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.326738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.326863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.326890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.327039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.327073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.327191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.327240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.327412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.327457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.327644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.327672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.327769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.327797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.327882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.327909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.328017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.328051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.328170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.328217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 
00:26:10.066 [2024-12-10 04:14:04.328359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.328391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.328500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.328534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.328702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.328728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.328812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.328838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.328976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.329001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.329128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.329172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.329332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.329364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.329472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.329503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.329661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.329686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.329799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.329824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 
00:26:10.066 [2024-12-10 04:14:04.329928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.329953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.330114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.330146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.330250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.330281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.330416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.330447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.330604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.330643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.330746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.330772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.066 qpair failed and we were unable to recover it. 00:26:10.066 [2024-12-10 04:14:04.330910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-12-10 04:14:04.330941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.331047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.331072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.331206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.331250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.331335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.331361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 
00:26:10.067 [2024-12-10 04:14:04.331443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.331471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.331583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.331609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.331694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.331720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.331849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.331880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.332014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.332045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.332164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.332197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.332313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.332346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.332444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.332475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.332616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.332642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.332731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.332756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 
00:26:10.067 [2024-12-10 04:14:04.332871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.332897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.332975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.332999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.333141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.333188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.333325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.333358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.333461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.333491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.333579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.333605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.333716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.333761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.333849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.333874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.333980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.334026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.334140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.334166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 
00:26:10.067 [2024-12-10 04:14:04.334249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.334274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.334371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.334397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.334476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.334500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.334599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.334626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.334771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.334795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.334917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.334943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.335033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.335058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.335142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.335167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.335263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.335288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.335393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.335419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 
00:26:10.067 [2024-12-10 04:14:04.335492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.335517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.335619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.335645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.335783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.335808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.335920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.335945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.336039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.336064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.336153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-12-10 04:14:04.336179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.067 qpair failed and we were unable to recover it. 00:26:10.067 [2024-12-10 04:14:04.336294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.336319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.336432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.336457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.336543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.336578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.336663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.336688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 
00:26:10.068 [2024-12-10 04:14:04.336776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.336800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.336919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.336945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.337064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.337089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.337173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.337198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.337278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.337303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.337414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.337440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.337555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.337581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.337660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.337686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.337819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.337860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.338007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.338036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 
00:26:10.068 [2024-12-10 04:14:04.338123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.338148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.338241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.338267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.338358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.338383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.338473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.338498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.338611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.338638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.338777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.338833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.338981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.339021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.339145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.339192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.339301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.339350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.339442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.339467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 
00:26:10.068 [2024-12-10 04:14:04.339591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.339622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.339756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.339780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.339867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.339892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.339974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.339999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.340103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.340127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.340208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.340233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.340315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.340340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.340451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.340476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.340568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.340594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.340688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.340712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 
00:26:10.068 [2024-12-10 04:14:04.340830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.340860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.340949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.340975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.341084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.341110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.341193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.341218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.341305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.341331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.341411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.341436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.341515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.341541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.341697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.341727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.341828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.341858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.341988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.342017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 
00:26:10.068 [2024-12-10 04:14:04.342157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.342186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.068 [2024-12-10 04:14:04.342284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-12-10 04:14:04.342334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.068 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.342474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.342502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.342613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.342652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.342786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.342831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.342962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.342995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.343177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.343211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.343356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.343388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.343491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.343522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.343657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.343697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 
00:26:10.069 [2024-12-10 04:14:04.343842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.343888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.343999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.344043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.344179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.344210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.344336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.344361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.344449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.344474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.344573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.344620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.344733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.344759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.344874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.344901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.345050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.345077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.345158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.345185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 
00:26:10.069 [2024-12-10 04:14:04.345273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.345307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.345391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.345417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.345502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.345529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.345637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.345677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.345818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.345865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.345977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.346022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.346110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.346135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.346239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.346286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.346371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.346396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.346513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.346538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 
00:26:10.069 [2024-12-10 04:14:04.346643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.346673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.346762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.346787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.346866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.346892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.346968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.346994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.347101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.347127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.347214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.347243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.347326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.347353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.347443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.347467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.347559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.347585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.347698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.347723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 
00:26:10.069 [2024-12-10 04:14:04.347830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.347856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.347941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.347966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.348052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.348081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.348172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.348196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.348285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.348310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.348402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.348427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.348514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.069 [2024-12-10 04:14:04.348540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.069 qpair failed and we were unable to recover it. 00:26:10.069 [2024-12-10 04:14:04.348657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.348681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.348798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.348823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.348908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.348933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 
00:26:10.070 [2024-12-10 04:14:04.349067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.349092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.349185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.349214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.349303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.349330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.349448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.349474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.349566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.349592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.349672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.349696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.349792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.349820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.349910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.349937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.350080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.350105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.350211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.350243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 
00:26:10.070 [2024-12-10 04:14:04.350343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.350373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.350482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.350513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.350699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.350745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.350877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.350924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.351087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.351133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.351269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.351315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.351432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.351457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.351570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.351596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.351690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.351717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.351828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.351862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 
00:26:10.070 [2024-12-10 04:14:04.351942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.351968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.352050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.352075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.352184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.352211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.352316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.352341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.352446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.352473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.352581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.352620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.352762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.352794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.352977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.353012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.353177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.353211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.353362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.353399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 
00:26:10.070 [2024-12-10 04:14:04.353515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.353541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.353659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.353686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.353810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.353858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.354021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.354054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.354189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.354222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.354320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.354353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.354500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.354527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.354640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.354668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.354758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.354783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.354870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.354896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 
00:26:10.070 [2024-12-10 04:14:04.355008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.070 [2024-12-10 04:14:04.355033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.070 qpair failed and we were unable to recover it. 00:26:10.070 [2024-12-10 04:14:04.355169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.355202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.355381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.355413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.355526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.355591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.355726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.355753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.355869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.355895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.356022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.356068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.356182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.356214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.356329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.356355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.356501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.356536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 
00:26:10.071 [2024-12-10 04:14:04.356691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.356728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.356850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.356876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.356966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.356991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.357098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.357123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.357222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.357254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.357413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.357453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.357592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.357620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.357707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.357733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.357844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.357871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.358008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.358049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 
00:26:10.071 [2024-12-10 04:14:04.358185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.358225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.358370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.358404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.358536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.358594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.358704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.358730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.358817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.358843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.358926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.358951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.359040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.359081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.359212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.359247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.071 [2024-12-10 04:14:04.359385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.071 [2024-12-10 04:14:04.359418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.071 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.359530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.359589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 
00:26:10.072 [2024-12-10 04:14:04.359690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.359718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.359808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.359833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.359999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.360046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.360162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.360195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.360320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.360345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.360430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.360458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.360568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.360597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.360707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.360733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.360892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.360952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.361108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.361145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 
00:26:10.072 [2024-12-10 04:14:04.361329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.361369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.361492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.361519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.361633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.361660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.361773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.361803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.362014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.362048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.362159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.362192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.362349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.362396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.362524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.362565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.362666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.362693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.362773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.362799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 
00:26:10.072 [2024-12-10 04:14:04.362925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.362953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.363069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.363096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.363236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.363268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.363502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.363536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.363651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.363678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.363767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.363793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.363980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.364007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.364124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.364171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.364338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.364374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.364495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.364527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 
00:26:10.072 [2024-12-10 04:14:04.364641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.364679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.364778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.364805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.364925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.364953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.365038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.365063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.365141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.365184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.365301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.365350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.365453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.365485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.365615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.365642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.365736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.365762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.365841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.365867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 
00:26:10.072 [2024-12-10 04:14:04.365973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.365997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.366144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.366175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.366278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.366310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.072 [2024-12-10 04:14:04.366459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.072 [2024-12-10 04:14:04.366490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.072 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.366656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.366683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.366791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.366829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.366912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.366938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.367038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.367071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.367167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.367198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.367325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.367358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 
00:26:10.355 [2024-12-10 04:14:04.367477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.367515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.367673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.367700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.367814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.367839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.367922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.367947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.368052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.368085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.368214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.368241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.368351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.368379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.368469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.368495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.368593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.368625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.368732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.368758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 
00:26:10.355 [2024-12-10 04:14:04.368871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.368898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.368988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.369016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.369148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.369180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.369283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.369315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.369415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.369447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.369583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.369609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.369723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.369749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.369860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.369887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.370017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.370050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.370160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.370199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 
00:26:10.355 [2024-12-10 04:14:04.370345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.370381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.370521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.370552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.355 qpair failed and we were unable to recover it. 00:26:10.355 [2024-12-10 04:14:04.370661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.355 [2024-12-10 04:14:04.370688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.370803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.370831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.370913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.370938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.371051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.371078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.371161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.371188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.371294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.371327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.371444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.371482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.371653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.371681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 
00:26:10.356 [2024-12-10 04:14:04.371772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.371799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.371875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.371901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.371977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.372004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.372121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.372153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.372328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.372361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.372466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.372502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.372630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.372660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.372757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.372784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.372888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.372936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.373019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.373048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 
00:26:10.356 [2024-12-10 04:14:04.373154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.373186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.373309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.373344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.373479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.373511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.373672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.373698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.373798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.373830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.373976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.374007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.374168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.374206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.374380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.374412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.374553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.374601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 00:26:10.356 [2024-12-10 04:14:04.374690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.356 [2024-12-10 04:14:04.374716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.356 qpair failed and we were unable to recover it. 
00:26:10.356 [2024-12-10 04:14:04.374828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.356 [2024-12-10 04:14:04.374853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:10.356 qpair failed and we were unable to recover it.
00:26:10.356 [2024-12-10 04:14:04.374969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.356 [2024-12-10 04:14:04.374994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:10.356 qpair failed and we were unable to recover it.
00:26:10.356-00:26:10.362 [2024-12-10 04:14:04.375 - 04:14:04.409] The same two-line failure (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420) repeats for every remaining connection attempt in this window, cycling through tqpair=0x7f5ba8000b90, 0x7f5bb0000b90, 0x7f5ba4000b90 and 0x1559fa0; each attempt ends with "qpair failed and we were unable to recover it."
00:26:10.362 [2024-12-10 04:14:04.409279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.409324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.409471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.409506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.409685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.409711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.409822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.409848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.409955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.409981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.410134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.410169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.410311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.410364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.410508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.410553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.410689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.410715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.410865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.410890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 
00:26:10.362 [2024-12-10 04:14:04.411043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.411107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.411308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.411373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.411555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.411581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.411714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.411744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.411872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.411907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.412112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.412147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.412347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.412383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.412535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.412597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.412736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.412762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.412852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.412878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 
00:26:10.362 [2024-12-10 04:14:04.413039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.413078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.413247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.413284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.413399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.413435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.413598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.413625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.413713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.413740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.413844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.413870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.362 qpair failed and we were unable to recover it. 00:26:10.362 [2024-12-10 04:14:04.414018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.362 [2024-12-10 04:14:04.414055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.414220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.414257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.414401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.414438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.414559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.414610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 
00:26:10.363 [2024-12-10 04:14:04.414725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.414763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.414918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.414954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.415106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.415143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.415320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.415357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.415525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.415591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.415797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.415853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.416043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.416083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.416233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.416271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.416391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.416429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.416542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.416599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 
00:26:10.363 [2024-12-10 04:14:04.416752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.416792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.416949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.416986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.417130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.417166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.417307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.417370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.417587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.417624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.417796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.417837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.417952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.417989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.418103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.418139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.418265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.418301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.418422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.418460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 
00:26:10.363 [2024-12-10 04:14:04.418654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.418710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.418893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.418933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.419087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.419126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.419277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.419322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.419466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.419509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.419689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.419725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.419880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.419919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.420051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.420088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.420238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.420275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.420390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.420429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 
00:26:10.363 [2024-12-10 04:14:04.420603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.420640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.420855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.420891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.421077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.421116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.421300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.421337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.421502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.363 [2024-12-10 04:14:04.421586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.363 qpair failed and we were unable to recover it. 00:26:10.363 [2024-12-10 04:14:04.421763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.421800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.421983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.422020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.422157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.422195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.422353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.422392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.422510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.422556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 
00:26:10.364 [2024-12-10 04:14:04.422691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.422731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.422867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.422906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.423034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.423072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.423235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.423274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.423433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.423472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.423633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.423673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.423793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.423852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.424049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.424113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.424316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.424380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.424563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.424603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 
00:26:10.364 [2024-12-10 04:14:04.424752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.424811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.425020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.425059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.425216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.425255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.425404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.425442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.425590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.425628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.425743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.425780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.425950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.425990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.426148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.426186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.426316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.426383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.426557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.426614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 
00:26:10.364 [2024-12-10 04:14:04.426802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.426840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.426971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.427009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.427166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.427205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.427353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.427398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.427509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.427555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.427677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.427717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.427913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.427952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.428155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.428197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.364 [2024-12-10 04:14:04.428394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.364 [2024-12-10 04:14:04.428458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.364 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.428596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.428667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 
00:26:10.365 [2024-12-10 04:14:04.428887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.428952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.429151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.429214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.429384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.429427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.429603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.429642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.429825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.429888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.430110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.430173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.430390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.430456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.430694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.430754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.431010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.431052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.431195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.431235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 
00:26:10.365 [2024-12-10 04:14:04.431357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.431396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.431563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.431644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.431834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.431874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.432064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.432104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.432224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.432263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.432384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.432423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.432573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.432614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.432769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.432809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.433007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.433047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.433198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.433237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 
00:26:10.365 [2024-12-10 04:14:04.433429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.433471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.433676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.433718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.433885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.433927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.434043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.434084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.434213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.434254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.434412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.434453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.434633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.434674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.434833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.434874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.435008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.435049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.435181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.435222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 
00:26:10.365 [2024-12-10 04:14:04.435398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.435441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.435616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.435656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.435775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.435815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.435918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.435969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.436133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.436175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.436333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.436372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.436525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.436572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.365 [2024-12-10 04:14:04.436763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.365 [2024-12-10 04:14:04.436801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.365 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.436932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.436971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.437109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.437147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 
00:26:10.366 [2024-12-10 04:14:04.437313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.437355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.437543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.437590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.437767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.437810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.437969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.438011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.438176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.438217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.438385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.438426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.438593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.438637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.438777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.438819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.438982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.439022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.439180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.439221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 
00:26:10.366 [2024-12-10 04:14:04.439350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.439391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.439510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.439565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.439717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.439760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.439887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.439928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.440120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.440160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.440300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.440343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.440516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.440567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.440727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.440768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.440936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.440976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.441132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.441174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 
00:26:10.366 [2024-12-10 04:14:04.441344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.441385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.441558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.441606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.441776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.441818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.442018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.442059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.442222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.442265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.442462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.442504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.442643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.442685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.442881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.442922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.443091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.443132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.443284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.443324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 
00:26:10.366 [2024-12-10 04:14:04.443491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.443535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.443718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.443759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.443879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.443919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.444055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.444104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.444259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.444299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.444437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.444478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.366 qpair failed and we were unable to recover it. 00:26:10.366 [2024-12-10 04:14:04.444657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.366 [2024-12-10 04:14:04.444701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.444870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.444911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.445073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.445115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.445269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.445310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 
00:26:10.367 [2024-12-10 04:14:04.445433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.445500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.445708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.445749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.445932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.445976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.446171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.446212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.446379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.446420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.446623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.446665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.446839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.446879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.447051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.447092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.447257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.447298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.447492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.447531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 
00:26:10.367 [2024-12-10 04:14:04.447705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.447746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.447913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.447953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.448123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.448163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.448326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.448367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.448533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.448588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.448759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.448800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.448961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.449002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.449196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.449238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.449371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.449430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.449628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.449673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 
00:26:10.367 [2024-12-10 04:14:04.449855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.449902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.450107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.450149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.450285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.450328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.450458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.450504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.450698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.450740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.450905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.450945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.451115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.451155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.451348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.451388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.451538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.451592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.451734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.451776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 
00:26:10.367 [2024-12-10 04:14:04.451952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.451995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.452139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.452182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.452346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.452389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.452600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.452665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.452849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.367 [2024-12-10 04:14:04.452889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.367 qpair failed and we were unable to recover it. 00:26:10.367 [2024-12-10 04:14:04.453049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.453090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.453247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.453288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.453410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.453451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.453604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.453645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.453827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.453869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 
00:26:10.368 [2024-12-10 04:14:04.454066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.454109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.454242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.454284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.454454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.454496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.454698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.454739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.454943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.455002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.455213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.455255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.455439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.455481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.455674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.455715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.455871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.455947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.456222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.456286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 
00:26:10.368 [2024-12-10 04:14:04.456533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.456587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.456762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.456819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.457033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.457075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.457254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.457296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.457464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.457508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.457689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.457733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.457901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.457943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.458076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.458121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.458298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.458341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.458499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.458541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 
00:26:10.368 [2024-12-10 04:14:04.458710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.458760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.458940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.458984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.459169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.459209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.459355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.459395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.459590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.459634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.459846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.459889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.460073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.460113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.368 [2024-12-10 04:14:04.460246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.368 [2024-12-10 04:14:04.460286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.368 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.460479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.460536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.460730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.460773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 
00:26:10.369 [2024-12-10 04:14:04.460931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.460974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.461148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.461191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.461354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.461411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.461577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.461619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.461808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.461851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.462025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.462067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.462229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.462276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.462446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.462492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.462683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.462729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.462863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.462908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 
00:26:10.369 [2024-12-10 04:14:04.463075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.463121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.463265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.463312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.463479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.463526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.463691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.463737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.463876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.463921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.464107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.464152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.464334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.464374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.464515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.464568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.464762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.464824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.464960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.465001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 
00:26:10.369 [2024-12-10 04:14:04.465234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.465275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.465473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.465532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.465742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.465787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.465967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.466013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.466201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.466247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.466423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.466467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.466649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.466694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.466837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.466881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.467057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.467101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.467272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.467316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 
00:26:10.369 [2024-12-10 04:14:04.467488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.467571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.467780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.467825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.467956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.468003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.468180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.468226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.369 qpair failed and we were unable to recover it. 00:26:10.369 [2024-12-10 04:14:04.468444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.369 [2024-12-10 04:14:04.468489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.468677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.468724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.468900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.468946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.469178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.469218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.469369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.469428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.469581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.469627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 
00:26:10.370 [2024-12-10 04:14:04.469807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.469853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.470034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.470081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.470260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.470306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.470491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.470537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.470730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.470792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.470937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.470983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.471113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.471158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.471375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.471421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.471589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.471635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.471834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.471874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 
00:26:10.370 [2024-12-10 04:14:04.472064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.472128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.472317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.472357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.472519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.472588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.472733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.472780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.472971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.473017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.473191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.473236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.473449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.473496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.473680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.473726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.473938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.473983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 00:26:10.370 [2024-12-10 04:14:04.474174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.370 [2024-12-10 04:14:04.474213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.370 qpair failed and we were unable to recover it. 
00:26:10.375 [2024-12-10 04:14:04.522022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.375 [2024-12-10 04:14:04.522076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.375 qpair failed and we were unable to recover it. 00:26:10.375 [2024-12-10 04:14:04.522284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.375 [2024-12-10 04:14:04.522335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.375 qpair failed and we were unable to recover it. 00:26:10.375 [2024-12-10 04:14:04.522535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.375 [2024-12-10 04:14:04.522602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.375 qpair failed and we were unable to recover it. 00:26:10.375 [2024-12-10 04:14:04.522840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.375 [2024-12-10 04:14:04.522892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.375 qpair failed and we were unable to recover it. 00:26:10.375 [2024-12-10 04:14:04.523143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.375 [2024-12-10 04:14:04.523194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.375 qpair failed and we were unable to recover it. 00:26:10.375 [2024-12-10 04:14:04.523390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.375 [2024-12-10 04:14:04.523443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.375 qpair failed and we were unable to recover it. 00:26:10.375 [2024-12-10 04:14:04.523635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.375 [2024-12-10 04:14:04.523688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.375 qpair failed and we were unable to recover it. 00:26:10.375 [2024-12-10 04:14:04.523896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.375 [2024-12-10 04:14:04.523949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.375 qpair failed and we were unable to recover it. 00:26:10.375 [2024-12-10 04:14:04.524172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.375 [2024-12-10 04:14:04.524227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.375 qpair failed and we were unable to recover it. 00:26:10.375 [2024-12-10 04:14:04.524441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.375 [2024-12-10 04:14:04.524497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.375 qpair failed and we were unable to recover it. 
00:26:10.376 [2024-12-10 04:14:04.524752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.524809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.525057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.525112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.525264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.525318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.525573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.525630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.525851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.525906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.526143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.526203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.526420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.526474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.526709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.526766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.527022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.527076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.527288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.527343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 
00:26:10.376 [2024-12-10 04:14:04.527599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.527665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.527873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.527929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.528147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.528202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.528362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.528418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.528654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.528709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.528924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.528979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.529165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.529220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.529442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.529501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.529741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.529782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.529946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.529986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 
00:26:10.376 [2024-12-10 04:14:04.530214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.530270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.530506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.530595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.530825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.530881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.531061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.531094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.531216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.531250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.531390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.531424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.531579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.531638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.531781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.531826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.532007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.532049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.532194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.532239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 
00:26:10.376 [2024-12-10 04:14:04.532413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.532465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.532638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.532682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.532854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.532899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.533110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.533155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.533336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.533380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.533618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.533677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.533892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.533952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.534150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.534206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.376 [2024-12-10 04:14:04.534450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.376 [2024-12-10 04:14:04.534506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.376 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.534705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.534760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 
00:26:10.377 [2024-12-10 04:14:04.534937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.535001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.535288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.535345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.535562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.535619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.535853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.535907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.536123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.536179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.536409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.536465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.536709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.536768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.537022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.537076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.537255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.537311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.537473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.537531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 
00:26:10.377 [2024-12-10 04:14:04.537790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.537855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.538075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.538132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.538391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.538448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.538693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.538750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.539026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.539084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.539350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.539405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.539586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.539644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.539832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.539897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.540141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.540198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.540367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.540443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 
00:26:10.377 [2024-12-10 04:14:04.540752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.540808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.540996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.541053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.541254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.541310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.541521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.541591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.541785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.541840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.542067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.542125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.542332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.542403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.542624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.542681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.542935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.542992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.543209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.543263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 
00:26:10.377 [2024-12-10 04:14:04.543533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.543615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.543840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.543895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.544153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.377 [2024-12-10 04:14:04.544215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.377 qpair failed and we were unable to recover it. 00:26:10.377 [2024-12-10 04:14:04.544488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.544578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.544853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.544910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.545140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.545196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.545456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.545511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.545754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.545834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.546038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.546098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.546290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.546346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 
00:26:10.378 [2024-12-10 04:14:04.546587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.546646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.546866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.546932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.547158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.547213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.547431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.547485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.547727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.547785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.548007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.548063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.548276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.548339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.548620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.548677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.548861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.548915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.549161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.549227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 
00:26:10.378 [2024-12-10 04:14:04.549411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.549475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.549682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.549736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.549990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.550045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.550312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.550368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.550539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.550610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.550825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.550879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.551137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.551195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.551383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.551438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.551662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.551717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.551911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.551968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 
00:26:10.378 [2024-12-10 04:14:04.552190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.552244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.552460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.552531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.552755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.552834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.553061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.553113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.553295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.553348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.553567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.553656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.553871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.553931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.554120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.554171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.554383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.554448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.554701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.554755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 
00:26:10.378 [2024-12-10 04:14:04.554920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.554973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.378 [2024-12-10 04:14:04.555173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.378 [2024-12-10 04:14:04.555236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.378 qpair failed and we were unable to recover it. 00:26:10.379 [2024-12-10 04:14:04.555422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.379 [2024-12-10 04:14:04.555475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.379 qpair failed and we were unable to recover it. 00:26:10.379 [2024-12-10 04:14:04.555694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.379 [2024-12-10 04:14:04.555756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.379 qpair failed and we were unable to recover it. 00:26:10.379 [2024-12-10 04:14:04.555947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.379 [2024-12-10 04:14:04.556029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.379 qpair failed and we were unable to recover it. 00:26:10.379 [2024-12-10 04:14:04.556272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.379 [2024-12-10 04:14:04.556328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.379 qpair failed and we were unable to recover it. 00:26:10.379 [2024-12-10 04:14:04.556527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.379 [2024-12-10 04:14:04.556643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.379 qpair failed and we were unable to recover it. 00:26:10.379 [2024-12-10 04:14:04.556910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.379 [2024-12-10 04:14:04.556987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.379 qpair failed and we were unable to recover it. 00:26:10.379 [2024-12-10 04:14:04.557209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.379 [2024-12-10 04:14:04.557276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.379 qpair failed and we were unable to recover it. 00:26:10.379 [2024-12-10 04:14:04.557504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.379 [2024-12-10 04:14:04.557577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.379 qpair failed and we were unable to recover it. 
00:26:10.379 [2024-12-10 04:14:04.557764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.379 [2024-12-10 04:14:04.557844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.379 qpair failed and we were unable to recover it. 00:26:10.379 [2024-12-10 04:14:04.558018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.379 [2024-12-10 04:14:04.558103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.379 qpair failed and we were unable to recover it. 00:26:10.379 [2024-12-10 04:14:04.558293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.379 [2024-12-10 04:14:04.558347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.379 qpair failed and we were unable to recover it. 00:26:10.379 [2024-12-10 04:14:04.558507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.379 [2024-12-10 04:14:04.558578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.379 qpair failed and we were unable to recover it. 00:26:10.379 [2024-12-10 04:14:04.558806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.379 [2024-12-10 04:14:04.558870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.379 qpair failed and we were unable to recover it. 00:26:10.379 [2024-12-10 04:14:04.559087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.379 [2024-12-10 04:14:04.559156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.379 qpair failed and we were unable to recover it. 00:26:10.379 [2024-12-10 04:14:04.559382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.379 [2024-12-10 04:14:04.559443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.379 qpair failed and we were unable to recover it. 00:26:10.379 [2024-12-10 04:14:04.559708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.379 [2024-12-10 04:14:04.559772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.379 qpair failed and we were unable to recover it. 00:26:10.379 [2024-12-10 04:14:04.560029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.379 [2024-12-10 04:14:04.560092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.379 qpair failed and we were unable to recover it. 00:26:10.379 [2024-12-10 04:14:04.560325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.379 [2024-12-10 04:14:04.560381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.379 qpair failed and we were unable to recover it. 
00:26:10.379 [2024-12-10 04:14:04.560606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.379 [2024-12-10 04:14:04.560680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420
00:26:10.379 qpair failed and we were unable to recover it.
[... the same three-message sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats back-to-back, with a few interleaved attempts on tqpair=0x7f5bb0000b90, from 04:14:04.560 through 04:14:04.587 ...]
00:26:10.381 [2024-12-10 04:14:04.587734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.381 [2024-12-10 04:14:04.587824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420
00:26:10.381 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f5bb0000b90 only, through 04:14:04.625014 ...]
00:26:10.385 [2024-12-10 04:14:04.625256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.625322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.625575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.625642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.625917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.625984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.626302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.626375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.626606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.626675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.626903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.626969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.627235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.627301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.627639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.627706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.627913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.627981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.628272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.628337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 
00:26:10.385 [2024-12-10 04:14:04.628537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.628617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.628899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.628964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.629215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.629280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.629532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.629629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.629897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.629965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.630224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.630288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.630584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.630650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.630934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.631009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.631251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.631316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.631537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.631636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 
00:26:10.385 [2024-12-10 04:14:04.631880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.631945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.632150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.632226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.632492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.632574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.632865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.632939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.633159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.633223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.633503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.633603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.633868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.633946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.634209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.634274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.634525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.634612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.634825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.634891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 
00:26:10.385 [2024-12-10 04:14:04.635176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.635241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.635496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.635595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.635824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.635891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.636149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.636214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.636459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-12-10 04:14:04.636535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.385 qpair failed and we were unable to recover it. 00:26:10.385 [2024-12-10 04:14:04.636810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.636877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.637158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.637223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.637422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.637489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.637792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.637858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.638104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.638169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 
00:26:10.386 [2024-12-10 04:14:04.638429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.638497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.638791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.638857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.639111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.639184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.639436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.639502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.639785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.639851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.640110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.640179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.640432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.640500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.640787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.640852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.641063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.641127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.641326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.641392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 
00:26:10.386 [2024-12-10 04:14:04.641587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.641653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.641911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.641980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.642220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.642286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.642591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.642670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.642871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.642939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.643152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.643218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.643498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.643594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.643862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.643939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.644187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.644251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.644452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.644516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 
00:26:10.386 [2024-12-10 04:14:04.644830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.644895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.645091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.645155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.645410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.645476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.645743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.645822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.646048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.646112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.646393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.646458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.646692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.646759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.647025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.647089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.647312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-12-10 04:14:04.647380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.386 qpair failed and we were unable to recover it. 00:26:10.386 [2024-12-10 04:14:04.647626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.647700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 
00:26:10.387 [2024-12-10 04:14:04.648011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.648074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.648325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.648390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.648684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.648750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.648978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.649055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.649319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.649385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.649586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.649653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.649864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.649929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.650183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.650247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.650491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.650584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.650834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.650898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 
00:26:10.387 [2024-12-10 04:14:04.651113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.651181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.651406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.651472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.651789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.651855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.652057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.652122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.652350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.652417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.652638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.652703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.652975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.653047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.653324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.653390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.653598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.653670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.653889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.653954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 
00:26:10.387 [2024-12-10 04:14:04.654159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.654223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.654468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.654532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.654774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.654841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.655111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.655179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.655461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.655535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.655809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.655874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.656163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.656227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.656424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.656502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.656724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.656789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.657090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.657157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 
00:26:10.387 [2024-12-10 04:14:04.657406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.657475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.657739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.657805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.658029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.658093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.658346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.658411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.658674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.658740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.659033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.659098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.659361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.659427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.387 qpair failed and we were unable to recover it. 00:26:10.387 [2024-12-10 04:14:04.659629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-12-10 04:14:04.659697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.659946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.660010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.660254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.660321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 
00:26:10.388 [2024-12-10 04:14:04.660526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.660639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.660947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.661012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.661232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.661299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.661558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.661625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.661885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.661950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.662147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.662215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.662436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.662506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.662809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.662876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.663166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.663232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.663482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.663568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 
00:26:10.388 [2024-12-10 04:14:04.663829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.663894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.664112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.664179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.664400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.664478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.664738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.664805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.665073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.665141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.665372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.665435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.665671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.665749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.666003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.666069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.666350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.666413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.666610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.666677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 
00:26:10.388 [2024-12-10 04:14:04.666873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.666937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.667171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.667236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.667495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.667579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.667807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.667873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.668163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.668228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.668480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.668559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.668779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.668843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.669094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.669175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.669398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.669466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.669763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.669841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 
00:26:10.388 [2024-12-10 04:14:04.670116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.670180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.670461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.670526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.670795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.670859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.671077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.671142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.671399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.671467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.671736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.671814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.388 qpair failed and we were unable to recover it. 00:26:10.388 [2024-12-10 04:14:04.672066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.388 [2024-12-10 04:14:04.672130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.672377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.672441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.672658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.672724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.672968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.673045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 
00:26:10.389 [2024-12-10 04:14:04.673355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.673421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.673739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.673807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.674109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.674174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.674369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.674435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.674715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.674781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.675042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.675120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.675366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.675430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.675732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.675799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.676013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.676078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.676334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.676398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 
00:26:10.389 [2024-12-10 04:14:04.676685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.676752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.677032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.677099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.677394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.677469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.677731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.677808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.678103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.678169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.678453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.678516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.678829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.678894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.679156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.679232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.679488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.679566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.679861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.679926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 
00:26:10.389 [2024-12-10 04:14:04.680128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.680195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.680450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.680516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.680767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.680840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.681046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.681112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.681406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.681471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.681771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.681837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.682078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.682143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.682361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.682450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.682733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.682800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 00:26:10.389 [2024-12-10 04:14:04.683095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.389 [2024-12-10 04:14:04.683162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.389 qpair failed and we were unable to recover it. 
00:26:10.389 [2024-12-10 04:14:04.683413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.683480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.683782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.683848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.684074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.684137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.684390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.684463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.684713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.684778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.685049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.685114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.685363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.685428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.685671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.685736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.685982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.686045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.686263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.686334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 
00:26:10.390 [2024-12-10 04:14:04.686609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.686677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.686919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.686988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.687223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.687287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.687542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.687621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.687903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.687967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.688210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.688282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.688564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.688629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.688854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.688920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.689161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.689224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.689466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.689532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 
00:26:10.390 [2024-12-10 04:14:04.689834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.689900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.690149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.690227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.690479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.690565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.690833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.690898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.691099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.691164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.691328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.691392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.691677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.691743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.691959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.692036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.692348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.692414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.692664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.692732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 
00:26:10.390 [2024-12-10 04:14:04.693031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.693095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.693347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.693410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.693665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.693732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.694025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.694091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.694315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.694381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.694638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.694704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.694878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.694943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.695232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.695307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.390 [2024-12-10 04:14:04.695512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.390 [2024-12-10 04:14:04.695597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.390 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.695930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.695996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 
00:26:10.391 [2024-12-10 04:14:04.696294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.696360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.696607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.696673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.696893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.696957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.697145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.697208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.697493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.697572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.697829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.697893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.698159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.698225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.698443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.698508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.698791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.698857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.699149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.699212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 
00:26:10.391 [2024-12-10 04:14:04.699406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.699479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.699775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.699841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.700139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.700205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.700455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.700519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.700795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.700860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.701051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.701117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.701362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.701428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.701633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.701710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.701969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.702032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.702250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.702313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 
00:26:10.391 [2024-12-10 04:14:04.702603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.702669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.702925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.702988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.703276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.703343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.703630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.703708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.704036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.704147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.704429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.704500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.704766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.704838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.705082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.705150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.705392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.705457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.705752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.705824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 
00:26:10.391 [2024-12-10 04:14:04.706108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.706176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.391 [2024-12-10 04:14:04.706420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.391 [2024-12-10 04:14:04.706490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.391 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.706767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.706848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.707097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.707163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.707378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.707463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.707700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.707767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.707969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.708036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.708325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.708409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.708672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.708742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.708972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.709038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 
00:26:10.392 [2024-12-10 04:14:04.709287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.709359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.709615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.709684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.709976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.710042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.710254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.710334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.710605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.710676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.710935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.710999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.711210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.711283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.711503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.711587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.711841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.711907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.712132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.712201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 
00:26:10.392 [2024-12-10 04:14:04.712433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.712499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.712821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.712888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.713158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.713228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.713523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.713613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.713906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.713981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.714259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.714326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.714607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.714678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.714931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.715015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.715317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.715383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.392 [2024-12-10 04:14:04.715669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.715736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 
00:26:10.392 [2024-12-10 04:14:04.716000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.392 [2024-12-10 04:14:04.716076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.392 qpair failed and we were unable to recover it. 00:26:10.665 [2024-12-10 04:14:04.716385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.665 [2024-12-10 04:14:04.716452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.665 qpair failed and we were unable to recover it. 00:26:10.665 [2024-12-10 04:14:04.716676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.665 [2024-12-10 04:14:04.716745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.665 qpair failed and we were unable to recover it. 00:26:10.665 [2024-12-10 04:14:04.717001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.665 [2024-12-10 04:14:04.717085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.665 qpair failed and we were unable to recover it. 00:26:10.665 [2024-12-10 04:14:04.717361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.665 [2024-12-10 04:14:04.717426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.665 qpair failed and we were unable to recover it. 00:26:10.665 [2024-12-10 04:14:04.717707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.665 [2024-12-10 04:14:04.717774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.665 qpair failed and we were unable to recover it. 00:26:10.665 [2024-12-10 04:14:04.718001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.665 [2024-12-10 04:14:04.718079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.665 qpair failed and we were unable to recover it. 00:26:10.665 [2024-12-10 04:14:04.718323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.665 [2024-12-10 04:14:04.718391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.665 qpair failed and we were unable to recover it. 00:26:10.665 [2024-12-10 04:14:04.718640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.665 [2024-12-10 04:14:04.718707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.665 qpair failed and we were unable to recover it. 00:26:10.665 [2024-12-10 04:14:04.718913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.665 [2024-12-10 04:14:04.718996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.665 qpair failed and we were unable to recover it. 
00:26:10.665 [2024-12-10 04:14:04.719268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.665 [2024-12-10 04:14:04.719337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.665 qpair failed and we were unable to recover it. 00:26:10.665 [2024-12-10 04:14:04.719583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.665 [2024-12-10 04:14:04.719651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.665 qpair failed and we were unable to recover it. 00:26:10.665 [2024-12-10 04:14:04.719942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.665 [2024-12-10 04:14:04.720022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.665 qpair failed and we were unable to recover it. 00:26:10.665 [2024-12-10 04:14:04.720322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.665 [2024-12-10 04:14:04.720390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.665 qpair failed and we were unable to recover it. 00:26:10.665 [2024-12-10 04:14:04.720637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.665 [2024-12-10 04:14:04.720703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.665 qpair failed and we were unable to recover it. 00:26:10.665 [2024-12-10 04:14:04.720974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.721044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.721327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.721395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.721614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.721703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.721969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.722048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.722315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.722381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 
00:26:10.666 [2024-12-10 04:14:04.722633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.722702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.722951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.723019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.723256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.723323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.723593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.723661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.723901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.723965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.724249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.724318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.724608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.724675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.724973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.725038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.725300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.725369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.725624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.725694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 
00:26:10.666 [2024-12-10 04:14:04.725954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.726019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.726319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.726387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.726668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.726736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.726933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.727001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.727274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.727342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.727565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.727643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.727893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.727961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.728160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.728241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.728500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.728585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.728847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.728912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 
00:26:10.666 [2024-12-10 04:14:04.729157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.729222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.729460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.729528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.729853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.729920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.730115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.730181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.730401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.730481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.730772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.730871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.731147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.731215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.731463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.731531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.731867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.731932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.732197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.732285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 
00:26:10.666 [2024-12-10 04:14:04.732633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.732723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.733075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.733165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.733486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.733596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.666 [2024-12-10 04:14:04.733905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.666 [2024-12-10 04:14:04.733997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.666 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.734323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.734421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.734807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.734897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.735204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.735298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.735643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.735726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.735999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.736071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.736294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.736359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 
00:26:10.667 [2024-12-10 04:14:04.736581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.736647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.736875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.736940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.737190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.737254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.737540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.737647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.737957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.738042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.738351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.738440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.738760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.738849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.739189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.739279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.739628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.739720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.740062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.740150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 
00:26:10.667 [2024-12-10 04:14:04.740463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.740577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.740868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.740936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.741139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.741206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.741491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.741575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.741829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.741892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.742112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.742177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.742425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.742510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.742825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.742916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.743242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.743332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.743631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.743720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 
00:26:10.667 [2024-12-10 04:14:04.744074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.744158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.744472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.744575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.744832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.744917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.745265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.745349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.745706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.745782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.746003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.746081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.746330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.746394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.746648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.746715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.746967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.747032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.747238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.747305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 
00:26:10.667 [2024-12-10 04:14:04.747516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.747601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.747907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.747997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.667 [2024-12-10 04:14:04.748352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.667 [2024-12-10 04:14:04.748441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.667 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.748830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.748918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.749216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.749302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.749657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.749745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.750070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.750158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.750473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.750589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.750891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.750977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.751253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.751321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 
00:26:10.668 [2024-12-10 04:14:04.751617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.751684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.751927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.751991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.752280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.752343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.752573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.752638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.752927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.753020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.753373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.753461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.753790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.753881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.754187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.754270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.754612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.754701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.755046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.755133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 
00:26:10.668 [2024-12-10 04:14:04.755499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.755608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.755913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.755983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.756283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.756352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.756645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.756713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.756990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.757055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.757303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.757368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.757637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.757723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.758015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.758105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.758411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.758499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.758816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.758904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 
00:26:10.668 [2024-12-10 04:14:04.759249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.759335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.759684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.759775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.760137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.760223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.760520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.760627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.760951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.761050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.761357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.761425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.761647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.761717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.761900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.761964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.762195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.762258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.762566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.762633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 
00:26:10.668 [2024-12-10 04:14:04.762882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.762947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.668 qpair failed and we were unable to recover it. 00:26:10.668 [2024-12-10 04:14:04.763231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.668 [2024-12-10 04:14:04.763295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.763563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.763628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.763845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.763909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.764114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.764177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.764430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.764495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.764769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.764850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.765098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.765163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.765377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.765440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.765708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.765773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 
00:26:10.669 [2024-12-10 04:14:04.765975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.766038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.766216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.766279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.766521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.766605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.766855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.766920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.767167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.767230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.767478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.767541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.767814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.767878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.768177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.768240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.768500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.768581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.768813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.768877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 
00:26:10.669 [2024-12-10 04:14:04.769084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.769147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.769401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.769476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.769749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.769814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.770056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.770119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.770372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.770436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.770753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.770818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.771021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.771088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.771337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.771400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.771639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.771706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.771954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.772017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 
00:26:10.669 [2024-12-10 04:14:04.772232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.772295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.772542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.772620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.772806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.772869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.773108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.773171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.773451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.773514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.773790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.773853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.774047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.774110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.774336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.669 [2024-12-10 04:14:04.774398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.669 qpair failed and we were unable to recover it. 00:26:10.669 [2024-12-10 04:14:04.774642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.774706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.774989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.775052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 
00:26:10.670 [2024-12-10 04:14:04.775256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.775319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.775532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.775620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.775858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.775921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.776139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.776215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.776505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.776593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.776853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.776917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.777225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.777307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.777538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.777623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.777872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.777948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.778192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.778257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 
00:26:10.670 [2024-12-10 04:14:04.778499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.778581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.778789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.778853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.779134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.779198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.779417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.779480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.779715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.779783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.780003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.780066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.780329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.780392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.780689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.780753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.781006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.781067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.781354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.781415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 
00:26:10.670 [2024-12-10 04:14:04.781620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.781684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.781927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.781992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.782258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.782323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.782511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.782591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.782844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.782909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.783204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.783267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.783515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.783613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.783872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.783936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.784150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.784214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.784467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.784531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 
00:26:10.670 [2024-12-10 04:14:04.784846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.784909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.670 qpair failed and we were unable to recover it. 00:26:10.670 [2024-12-10 04:14:04.785195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.670 [2024-12-10 04:14:04.785259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.785543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.785625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.785924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.785988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.786200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.786264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.786515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.786598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.786852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.786916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.787136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.787200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.787493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.787591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.787847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.787911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 
00:26:10.671 [2024-12-10 04:14:04.788117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.788180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.788417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.788481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.788750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.788814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.789057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.789120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.789370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.789434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.789692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.789757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.789978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.790040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.790253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.790320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.790521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.790604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.790856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.790920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 
00:26:10.671 [2024-12-10 04:14:04.791212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.791276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.791489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.791590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.791807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.791870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.792110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.792172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.792383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.792446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.792704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.792768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.793010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.793073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.793364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.793427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.793713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.793778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.794056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.794118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 
00:26:10.671 [2024-12-10 04:14:04.794359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.794422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.794661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.794726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.795009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.795073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.795370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.795433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.795679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.795744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.796010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.796073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.796282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.796345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.796581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.796646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.796897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.796960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 00:26:10.671 [2024-12-10 04:14:04.797203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.797266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.671 qpair failed and we were unable to recover it. 
00:26:10.671 [2024-12-10 04:14:04.797476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.671 [2024-12-10 04:14:04.797539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.797816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.797880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.798115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.798178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.798411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.798475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.798702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.798766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.798930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.798993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.799275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.799349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.799608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.799674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.799924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.799988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.800181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.800245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 
00:26:10.672 [2024-12-10 04:14:04.800485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.800562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.800788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.800850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.801127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.801188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.801409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.801470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.801730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.801792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.802042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.802104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.802319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.802380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.802594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.802655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.802870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.802931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.803126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.803187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 
00:26:10.672 [2024-12-10 04:14:04.803425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.803486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.803753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.803816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.804108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.804169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.804417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.804478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.804788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.804851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.805132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.805194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.805455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.805516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.805755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.805817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.806054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.806115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.806315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.806378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 
00:26:10.672 [2024-12-10 04:14:04.806589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.806652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.806866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.806928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.807127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.807187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.807432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.807504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.807789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.807853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.808139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.808202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.808486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.808564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.808869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.808933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.809133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.809197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 00:26:10.672 [2024-12-10 04:14:04.809441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.672 [2024-12-10 04:14:04.809508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.672 qpair failed and we were unable to recover it. 
00:26:10.673 [2024-12-10 04:14:04.809775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.809839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.810043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.810107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.810283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.810347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.810579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.810644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.810867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.810930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.811178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.811243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.811490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.811572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.811836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.811900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.812175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.812240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.812455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.812519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 
00:26:10.673 [2024-12-10 04:14:04.812834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.812899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.813145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.813209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.813407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.813470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.813709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.813775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.814025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.814089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.814283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.814346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.814569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.814634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.814919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.814983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.815265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.815327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.815622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.815688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 
00:26:10.673 [2024-12-10 04:14:04.815941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.816015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.816237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.816300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.816519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.816601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.816858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.816921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.817129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.817192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.817441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.817505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.817736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.817799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.818083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.818147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.818353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.818415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.818702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.818766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 
00:26:10.673 [2024-12-10 04:14:04.819002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.819066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.819260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.819322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.819604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.819670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.819919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.819984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.820283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.820347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.820573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.820638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.820934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.820997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.821241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.821304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.821535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.673 [2024-12-10 04:14:04.821611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.673 qpair failed and we were unable to recover it. 00:26:10.673 [2024-12-10 04:14:04.821830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.821895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 
00:26:10.674 [2024-12-10 04:14:04.822145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.822207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.822457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.822520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.822730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.822794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.823041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.823103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.823353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.823416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.823682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.823748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.823997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.824061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.824314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.824377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.824660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.824725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.824989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.825053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 
00:26:10.674 [2024-12-10 04:14:04.825287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.825350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.825600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.825665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.825878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.825941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.826120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.826183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.826425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.826488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.826758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.826822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.827100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.827163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.827412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.827476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.827750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.827815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.828035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.828098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 
00:26:10.674 [2024-12-10 04:14:04.828380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.828443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.828678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.828755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.829008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.829071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.829311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.829375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.829654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.829719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.829966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.830029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.830267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.830331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.830588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.830657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.830883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.830947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.831197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.831261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 
00:26:10.674 [2024-12-10 04:14:04.831466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.831531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.831792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.831856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.832113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.832176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.674 [2024-12-10 04:14:04.832418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.674 [2024-12-10 04:14:04.832484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.674 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.832733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.832797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.833049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.833114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.833344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.833408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.833652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.833716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.833964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.834028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.834308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.834372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 
00:26:10.675 [2024-12-10 04:14:04.834654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.834718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.834970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.835033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.835308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.835371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.835572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.835637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.835856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.835920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.836151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.836214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.836497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.836574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.836840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.836903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.837148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.837221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.837466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.837530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 
00:26:10.675 [2024-12-10 04:14:04.837813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.837876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.838079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.838144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.838428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.838492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.838716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.838780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.839036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.839099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.839354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.839419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.839692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.839756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.839989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.840052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.840250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.840315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.840605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.840669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 
00:26:10.675 [2024-12-10 04:14:04.840960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.841024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.841314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.841378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.841638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.841703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.841954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.842018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.842200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.842264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.842506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.842582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.842840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.842903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.843137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.843201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.843442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.843506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.843772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.843837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 
00:26:10.675 [2024-12-10 04:14:04.844041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.844103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.844353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.675 [2024-12-10 04:14:04.844416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.675 qpair failed and we were unable to recover it. 00:26:10.675 [2024-12-10 04:14:04.844703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.844768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.845013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.845075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.845359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.845422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.845678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.845753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.846011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.846074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.846320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.846384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.846644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.846708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.846915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.846978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 
00:26:10.676 [2024-12-10 04:14:04.847253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.847316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.847600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.847664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.847904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.847966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.848201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.848264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.848462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.848525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.848767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.848831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.849072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.849134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.849385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.849449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.849736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.849801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.850061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.850126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 
00:26:10.676 [2024-12-10 04:14:04.850371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.850435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.850670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.850735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.850981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.851045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.851309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.851371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.851615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.851680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.851976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.852040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.852269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.852332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.852624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.852690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.852902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.852965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.853207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.853270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 
00:26:10.676 [2024-12-10 04:14:04.853493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.853571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.853813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.853878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.854120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.854183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.854401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.854463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.854752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.854816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.855072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.855134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.855377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.855440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.855678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.855742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.855949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.856012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.856224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.856287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 
00:26:10.676 [2024-12-10 04:14:04.856534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.856615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.676 qpair failed and we were unable to recover it. 00:26:10.676 [2024-12-10 04:14:04.856812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.676 [2024-12-10 04:14:04.856875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.857114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.857177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.857417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.857479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.857806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.857870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.858117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.858180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.858391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.858454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.858692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.858757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.859042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.859105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.859362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.859425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 
00:26:10.677 [2024-12-10 04:14:04.859716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.859781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.859992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.860054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.860293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.860356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.860602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.860667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.860864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.860926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.861136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.861200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.861492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.861572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.861791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.861853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.862094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.862158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.862419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.862483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 
00:26:10.677 [2024-12-10 04:14:04.862756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.862822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.863083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.863146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.863429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.863493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.863764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.863828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.864111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.864174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.864391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.864455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.864719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.864783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.865029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.865092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.865346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.865410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.865661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.865726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 
00:26:10.677 [2024-12-10 04:14:04.865983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.866046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.866283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.866347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.866640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.866706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.866964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.867038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.867300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.867363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.867611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.867677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.867939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.868003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.868258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.868321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.868518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.868594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 00:26:10.677 [2024-12-10 04:14:04.868839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.677 [2024-12-10 04:14:04.868903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.677 qpair failed and we were unable to recover it. 
00:26:10.677 [2024-12-10 04:14:04.869107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.869173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.869461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.869525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.869807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.869870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.870173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.870236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.870492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.870572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.870821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.870884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.871162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.871225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.871532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.871611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.871855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.871918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.872172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.872236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 
00:26:10.678 [2024-12-10 04:14:04.872480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.872564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.872863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.872927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.873174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.873238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.873539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.873640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.873895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.873958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.874215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.874277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.874531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.874612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.874824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.874888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.875126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.875189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.875439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.875502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 
00:26:10.678 [2024-12-10 04:14:04.875776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.875850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.876138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.876200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.876452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.876515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.876791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.876855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.877103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.877167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.877413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.877476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.877723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.877786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.878029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.878094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.878309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.878373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.878607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.878672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 
00:26:10.678 [2024-12-10 04:14:04.878887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.878950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.879157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.879220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.879459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.879522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.879753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.879817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.880021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.880084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.880361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.880424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.880690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.880754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.880966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.881032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.678 [2024-12-10 04:14:04.881278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.678 [2024-12-10 04:14:04.881341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.678 qpair failed and we were unable to recover it. 00:26:10.679 [2024-12-10 04:14:04.881633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.679 [2024-12-10 04:14:04.881698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.679 qpair failed and we were unable to recover it. 
00:26:10.679 [2024-12-10 04:14:04.881896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.679 [2024-12-10 04:14:04.881960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.679 qpair failed and we were unable to recover it. 00:26:10.679 [2024-12-10 04:14:04.882215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.679 [2024-12-10 04:14:04.882278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.679 qpair failed and we were unable to recover it. 00:26:10.679 [2024-12-10 04:14:04.882519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.679 [2024-12-10 04:14:04.882605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.679 qpair failed and we were unable to recover it. 00:26:10.679 [2024-12-10 04:14:04.882859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.679 [2024-12-10 04:14:04.882923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.679 qpair failed and we were unable to recover it. 00:26:10.679 [2024-12-10 04:14:04.883173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.679 [2024-12-10 04:14:04.883236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.679 qpair failed and we were unable to recover it. 00:26:10.679 [2024-12-10 04:14:04.883543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.679 [2024-12-10 04:14:04.883625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.679 qpair failed and we were unable to recover it. 00:26:10.679 [2024-12-10 04:14:04.883866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.679 [2024-12-10 04:14:04.883930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.679 qpair failed and we were unable to recover it. 00:26:10.679 [2024-12-10 04:14:04.884175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.679 [2024-12-10 04:14:04.884248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.679 qpair failed and we were unable to recover it. 00:26:10.679 [2024-12-10 04:14:04.884533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.679 [2024-12-10 04:14:04.884613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.679 qpair failed and we were unable to recover it. 00:26:10.679 [2024-12-10 04:14:04.884860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.679 [2024-12-10 04:14:04.884923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.679 qpair failed and we were unable to recover it. 
00:26:10.679 [2024-12-10 04:14:04.885142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.679 [2024-12-10 04:14:04.885205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.679 qpair failed and we were unable to recover it. 00:26:10.679 [2024-12-10 04:14:04.885413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.679 [2024-12-10 04:14:04.885476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.679 qpair failed and we were unable to recover it. 00:26:10.679 [2024-12-10 04:14:04.885693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.679 [2024-12-10 04:14:04.885757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.679 qpair failed and we were unable to recover it. 00:26:10.679 [2024-12-10 04:14:04.886006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.679 [2024-12-10 04:14:04.886070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.679 qpair failed and we were unable to recover it. 00:26:10.679 [2024-12-10 04:14:04.886358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.679 [2024-12-10 04:14:04.886421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.679 qpair failed and we were unable to recover it. 00:26:10.679 [2024-12-10 04:14:04.886711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.679 [2024-12-10 04:14:04.886776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.679 qpair failed and we were unable to recover it. 00:26:10.679 [2024-12-10 04:14:04.887030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.679 [2024-12-10 04:14:04.887092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.679 qpair failed and we were unable to recover it. 00:26:10.679 [2024-12-10 04:14:04.887317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.680 [2024-12-10 04:14:04.887380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.680 qpair failed and we were unable to recover it. 00:26:10.680 [2024-12-10 04:14:04.887623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.680 [2024-12-10 04:14:04.887687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.680 qpair failed and we were unable to recover it. 00:26:10.680 [2024-12-10 04:14:04.887971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.680 [2024-12-10 04:14:04.888034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.680 qpair failed and we were unable to recover it. 
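The repeated failures above are all against the same transport qpair object (tqpair=0x1559fa0): the connect() issued by posix_sock_create() toward 10.0.0.2 on port 4420 (the IANA-assigned NVMe/TCP port) keeps returning errno 111, which on Linux is ECONNREFUSED, i.e. the target answered the TCP SYN with a RST because nothing was accepting connections on that port at that moment. The following is a minimal standalone sketch, not SPDK code; the address and port are placeholders copied from the log, and any host/port pair with no listener reproduces the same errno with a plain POSIX connect().

/* Minimal sketch (not SPDK code): reproduce errno 111 (ECONNREFUSED) with a
 * plain POSIX connect(), the call that posix_sock_create() ultimately makes.
 * Address and port below are placeholders taken from the log output. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* placeholder target       */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target port this prints:
         *   connect() failed: Connection refused (errno = 111)  */
        printf("connect() failed: %s (errno = %d)\n", strerror(errno), errno);
    } else {
        printf("connected\n");
    }

    close(fd);
    return 0;
}

Compiled and run against a host/port with no listener, this prints "connect() failed: Connection refused (errno = 111)", matching the errno reported by posix_sock_create() above.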
00:26:10.680 [2024-12-10 04:14:04.888236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.680 [2024-12-10 04:14:04.888296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:10.680 qpair failed and we were unable to recover it.
00:26:10.680 [2024-12-10 04:14:04.888640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.680 [2024-12-10 04:14:04.888740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420
00:26:10.680 qpair failed and we were unable to recover it.
00:26:10.684 [2024-12-10 04:14:04.936807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.684 [2024-12-10 04:14:04.936907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:10.684 qpair failed and we were unable to recover it.
[console output condensed: the same three-line error sequence repeats for every further reconnect attempt between 04:14:04.888 and 04:14:04.953 (console time 00:26:10.680 through 00:26:10.685), first against tqpair=0x7f5ba4000b90 and then against tqpair=0x7f5ba8000b90, all targeting addr=10.0.0.2, port=4420 and all ending in "qpair failed and we were unable to recover it."]
00:26:10.685 [2024-12-10 04:14:04.953043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.685 [2024-12-10 04:14:04.953107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.685 qpair failed and we were unable to recover it. 00:26:10.685 [2024-12-10 04:14:04.953365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.685 [2024-12-10 04:14:04.953430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.685 qpair failed and we were unable to recover it. 00:26:10.685 [2024-12-10 04:14:04.953643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.685 [2024-12-10 04:14:04.953708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.685 qpair failed and we were unable to recover it. 00:26:10.685 [2024-12-10 04:14:04.953984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.685 [2024-12-10 04:14:04.954019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.685 qpair failed and we were unable to recover it. 00:26:10.685 [2024-12-10 04:14:04.954165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.685 [2024-12-10 04:14:04.954202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.685 qpair failed and we were unable to recover it. 00:26:10.685 [2024-12-10 04:14:04.954354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.685 [2024-12-10 04:14:04.954413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.685 qpair failed and we were unable to recover it. 00:26:10.685 [2024-12-10 04:14:04.954705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.685 [2024-12-10 04:14:04.954772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.685 qpair failed and we were unable to recover it. 00:26:10.685 [2024-12-10 04:14:04.955011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.685 [2024-12-10 04:14:04.955075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.685 qpair failed and we were unable to recover it. 00:26:10.685 [2024-12-10 04:14:04.955264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.685 [2024-12-10 04:14:04.955330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.685 qpair failed and we were unable to recover it. 00:26:10.685 [2024-12-10 04:14:04.955599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.685 [2024-12-10 04:14:04.955665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.685 qpair failed and we were unable to recover it. 
00:26:10.685 [2024-12-10 04:14:04.955947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.685 [2024-12-10 04:14:04.956012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.685 qpair failed and we were unable to recover it. 00:26:10.685 [2024-12-10 04:14:04.956311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.685 [2024-12-10 04:14:04.956375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.685 qpair failed and we were unable to recover it. 00:26:10.685 [2024-12-10 04:14:04.956601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.685 [2024-12-10 04:14:04.956666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.685 qpair failed and we were unable to recover it. 00:26:10.685 [2024-12-10 04:14:04.956956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.685 [2024-12-10 04:14:04.957021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.685 qpair failed and we were unable to recover it. 00:26:10.685 [2024-12-10 04:14:04.957269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.685 [2024-12-10 04:14:04.957333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.957576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.957641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.957928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.957993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.958205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.958272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.958570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.958637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.958923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.958988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 
00:26:10.686 [2024-12-10 04:14:04.959275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.959341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.959599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.959665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.959895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.959959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.960253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.960319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.960573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.960639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.960851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.960918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.961122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.961187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.961471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.961536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.961750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.961817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.962057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.962122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 
00:26:10.686 [2024-12-10 04:14:04.962361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.962426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.962711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.962788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.963086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.963151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.963424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.963487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.963712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.963778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.964067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.964132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.964376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.964440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.964746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.964811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.965111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.965175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.965461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.965524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 
00:26:10.686 [2024-12-10 04:14:04.965833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.965897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.966157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.966222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.966507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.966591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.966838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.966903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.967147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.967214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.967485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.967565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.967816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.967881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.968184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.968250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.968495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.968596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.968852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.968920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 
00:26:10.686 [2024-12-10 04:14:04.969223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.969287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.969581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.969649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.969940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.686 [2024-12-10 04:14:04.970005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.686 qpair failed and we were unable to recover it. 00:26:10.686 [2024-12-10 04:14:04.970261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.970326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.970634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.970700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.970991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.971056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.971295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.971359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.971655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.971720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.971978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.972042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.972289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.972353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 
00:26:10.687 [2024-12-10 04:14:04.972662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.972730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.973021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.973086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.973352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.973416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.973641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.973706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.973916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.973983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.974188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.974254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.974502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.974537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.974663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.974699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.974816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.974852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.974997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.975030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 
00:26:10.687 [2024-12-10 04:14:04.975315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.975379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.975628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.975670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.975831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.975864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.976138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.976203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.976452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.976517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.976781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.976846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.977080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.977147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.977337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.977401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.977662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.977728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.977948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.978013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 
00:26:10.687 [2024-12-10 04:14:04.978259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.978323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.978573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.978639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.978926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.978991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.979279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.979314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.979461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.979496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.979741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.979809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.980076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.980141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.980384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.980449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.687 qpair failed and we were unable to recover it. 00:26:10.687 [2024-12-10 04:14:04.980775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.687 [2024-12-10 04:14:04.980842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.981082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.981147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 
00:26:10.688 [2024-12-10 04:14:04.981382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.981447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.981717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.981784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.982029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.982092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.982337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.982404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.982692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.982728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.982871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.982906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.983024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.983057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.983209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.983272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.983566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.983633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.983932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.983996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 
00:26:10.688 [2024-12-10 04:14:04.984200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.984265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.984541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.984632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.984836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.984901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.985152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.985216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.985504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.985583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.985876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.985941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.986217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.986282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.986576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.986642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.986927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.986992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.987192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.987257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 
00:26:10.688 [2024-12-10 04:14:04.987541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.987623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.987874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.987948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.988244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.988309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.988607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.988673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.988922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.988987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.989269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.989333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.989582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.989648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.989932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.989996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.990223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.990286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.990524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.990602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 
00:26:10.688 [2024-12-10 04:14:04.990849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.990917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.991196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.688 [2024-12-10 04:14:04.991260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.688 qpair failed and we were unable to recover it. 00:26:10.688 [2024-12-10 04:14:04.991507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.991585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.991835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.991900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.992191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.992255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.992577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.992643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.992925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.992989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.993234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.993299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.993539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.993620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.993904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.993968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 
00:26:10.689 [2024-12-10 04:14:04.994252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.994317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.994610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.994676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.994933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.994997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.995280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.995316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.995486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.995522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.995837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.995902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.996085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.996148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.996426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.996490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.996769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.996835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.997123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.997187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 
00:26:10.689 [2024-12-10 04:14:04.997432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.997496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.997792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.997857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.998054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.998118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.998319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.998383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.998664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.998730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.998968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.999033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.999282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.999346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.999635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.999701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:04.999955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:04.999990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:05.000134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:05.000170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 
00:26:10.689 [2024-12-10 04:14:05.000358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:05.000423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:05.000668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:05.000744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:05.001038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:05.001104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:05.001405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:05.001468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:05.001769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:05.001835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:05.002093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:05.002160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:05.002451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:05.002514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:05.002790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:05.002854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:05.003056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:05.003121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.689 [2024-12-10 04:14:05.003406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:05.003470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 
00:26:10.689 [2024-12-10 04:14:05.003756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.689 [2024-12-10 04:14:05.003821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.689 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.004001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.004066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.004353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.004418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.004634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.004701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.004995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.005059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.005346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.005411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.005660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.005725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.005973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.006038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.006253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.006317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.006599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.006665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 
00:26:10.690 [2024-12-10 04:14:05.006911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.006976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.007197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.007261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.007565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.007632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.007883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.007948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.008140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.008207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.008454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.008518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.008789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.008854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.009110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.009175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.009381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.009446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.009752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.009818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 
00:26:10.690 [2024-12-10 04:14:05.010058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.010122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.010402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.010466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.010777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.010842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.011096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.011160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.011444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.011509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.011880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.011945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.012198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.012262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.012538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.012621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.012908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.012943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.013089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.013125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 
00:26:10.690 [2024-12-10 04:14:05.013285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.013351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.013644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.013722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.014021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.014085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.014374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.014439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.014644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.014709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.015006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.015070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.015310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.015377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.015675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.015739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.015942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.016007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.690 qpair failed and we were unable to recover it. 00:26:10.690 [2024-12-10 04:14:05.016291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.690 [2024-12-10 04:14:05.016356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 
00:26:10.691 [2024-12-10 04:14:05.016601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.016668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.016892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.016971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.017258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.017323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.017621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.017687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.017928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.017992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.018287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.018352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.018636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.018702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.018994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.019058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.019338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.019403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.019615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.019682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 
00:26:10.691 [2024-12-10 04:14:05.019927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.019991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.020238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.020302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.020542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.020623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.020876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.020941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.021241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.021306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.021596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.021662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.021915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.021979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.022274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.022338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.022637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.022703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.022954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.023020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 
00:26:10.691 [2024-12-10 04:14:05.023300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.023365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.023612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.023678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.023984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.024049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.024280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.024344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.024599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.024635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.024778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.024813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.025069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.025132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.025372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.025436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.025711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.025777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.026091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.026155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 
00:26:10.691 [2024-12-10 04:14:05.026446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.026510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.026785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.026860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.027101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.027166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.027355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.027419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.027699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.027766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.028010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.028077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.028327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.028392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.691 [2024-12-10 04:14:05.028605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.691 [2024-12-10 04:14:05.028671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.691 qpair failed and we were unable to recover it. 00:26:10.692 [2024-12-10 04:14:05.028970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.029034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 00:26:10.692 [2024-12-10 04:14:05.029320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.029384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 
00:26:10.692 [2024-12-10 04:14:05.029582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.029647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 00:26:10.692 [2024-12-10 04:14:05.029851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.029917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 00:26:10.692 [2024-12-10 04:14:05.030165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.030229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 00:26:10.692 [2024-12-10 04:14:05.030448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.030512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 00:26:10.692 [2024-12-10 04:14:05.030765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.030829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 00:26:10.692 [2024-12-10 04:14:05.031091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.031156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 00:26:10.692 [2024-12-10 04:14:05.031366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.031429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 00:26:10.692 [2024-12-10 04:14:05.031711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.031778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 00:26:10.692 [2024-12-10 04:14:05.032025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.032092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 00:26:10.692 [2024-12-10 04:14:05.032369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.032433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 
00:26:10.692 [2024-12-10 04:14:05.032630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.032665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 00:26:10.692 [2024-12-10 04:14:05.032814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.032849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 00:26:10.692 [2024-12-10 04:14:05.032962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.032996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 00:26:10.692 [2024-12-10 04:14:05.033141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.033176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 00:26:10.692 [2024-12-10 04:14:05.033280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.033315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 00:26:10.692 [2024-12-10 04:14:05.033428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.033463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 00:26:10.692 [2024-12-10 04:14:05.033631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.033666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 00:26:10.692 [2024-12-10 04:14:05.033803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.692 [2024-12-10 04:14:05.033835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.692 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.033972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.034005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.034124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.034158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 
00:26:10.969 [2024-12-10 04:14:05.034302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.034334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.034463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.034498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.034680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.034715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.034821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.034854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.034984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.035017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.035123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.035156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.035264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.035298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.035440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.035472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.035588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.035622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.035728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.035760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 
00:26:10.969 [2024-12-10 04:14:05.035897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.035930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.036037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.036077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.036194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.036227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.036337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.036371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.036507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.036540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.036688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.036720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.036860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.036893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.037027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.037061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.037200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.037232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.037425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.037460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 
00:26:10.969 [2024-12-10 04:14:05.037603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.037637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.037775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.037809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.038044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.038109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.038343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.038413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.038606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.038641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.038816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.038849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.039078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.039112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.039253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.039286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.039509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.039602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.039772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.039806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 
00:26:10.969 [2024-12-10 04:14:05.040066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.040100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.040236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.040269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.040480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.040566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.040710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.040744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.040888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.040921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.041067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.041101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.041239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.041274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.041477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.041511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.041647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.041699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 00:26:10.969 [2024-12-10 04:14:05.041815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.969 [2024-12-10 04:14:05.041851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.969 qpair failed and we were unable to recover it. 
00:26:10.969 [2024-12-10 04:14:05.042097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.042164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.042418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.042483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.042682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.042716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.042831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.042866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.043019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.043083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.043295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.043359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.043617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.043651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.043758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.043792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.044034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.044068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.044170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.044203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 
00:26:10.970 [2024-12-10 04:14:05.044334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.044411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.044653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.044688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.044813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.044846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.045055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.045118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.045376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.045439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.045744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.045777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.045970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.046003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.046138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.046172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.046276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.046309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.046521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.046608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 
00:26:10.970 [2024-12-10 04:14:05.046754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.046787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.046899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.046934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.047181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.047214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.047355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.047388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.047596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.047647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.047760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.047799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.047990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.048045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.048160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.048194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.048394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.048458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.048654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.048688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 
00:26:10.970 [2024-12-10 04:14:05.048791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.048824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.049007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.049072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.049313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.049376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.049600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.049635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.049746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.049780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.049953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.050016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.050225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.050289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.050524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.050566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.050682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.970 [2024-12-10 04:14:05.050715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.970 qpair failed and we were unable to recover it. 00:26:10.970 [2024-12-10 04:14:05.050828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.050861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 
00:26:10.971 [2024-12-10 04:14:05.050978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.051011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.051177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.051240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.051488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.051572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.051713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.051746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.051997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.052060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.052309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.052342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.052455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.052489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.052641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.052674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.052793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.052825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.052968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.053002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 
00:26:10.971 [2024-12-10 04:14:05.053130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.053162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.053320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.053356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.053589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.053629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.053722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.053755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.053891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.053923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.054029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.054062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.054174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.054207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.054304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.054336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.054445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.054477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.054590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.054622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 
00:26:10.971 [2024-12-10 04:14:05.054738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.054771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.054924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.054987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.055251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.055290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.055409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.055443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.055613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.055646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.055765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.055798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.056029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.056093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.056284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.056348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.056594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.056629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.056771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.056804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 
00:26:10.971 [2024-12-10 04:14:05.056942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.056974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.057109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.057142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.057275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.057308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.057407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.057439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.057576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.057610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.057726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.057758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.057866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.057898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.971 [2024-12-10 04:14:05.058112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.971 [2024-12-10 04:14:05.058145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.971 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.058243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.058275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.058412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.058448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 
00:26:10.972 [2024-12-10 04:14:05.058621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.058655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.058799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.058833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.058971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.059003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.059235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.059267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.059373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.059405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.059586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.059619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.059785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.059817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.059993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.060049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.060241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.060299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.060491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.060557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 
00:26:10.972 [2024-12-10 04:14:05.060675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.060707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.060840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.060873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.060985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.061017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.061190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.061249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.061472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.061530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.061717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.061749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.061929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.061962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.062070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.062102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.062261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.062317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.062527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.062583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 
00:26:10.972 [2024-12-10 04:14:05.062701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.062735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.062854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.062888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.063034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.063093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.063310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.063367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.063563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.063616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.063735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.063769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.063876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.063908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.064049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.064082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.064222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.064255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.064431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.064463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 
00:26:10.972 [2024-12-10 04:14:05.064603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.064638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.064788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.064821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.065046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.065104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.065318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.065384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.065612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.065647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.065785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.972 [2024-12-10 04:14:05.065816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.972 qpair failed and we were unable to recover it. 00:26:10.972 [2024-12-10 04:14:05.065973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.066032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.066276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.066308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.066474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.066506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.066690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.066752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 
00:26:10.973 [2024-12-10 04:14:05.067020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.067076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.067293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.067361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.067617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.067653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.067794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.067837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.068054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.068117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.068308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.068367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.068585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.068620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.068767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.068804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.069067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.069101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.069245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.069280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 
00:26:10.973 [2024-12-10 04:14:05.069420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.069456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.069749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.069813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.070003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.070065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.070182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.070233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.070363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.070398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.070606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.070669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.070928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.070971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.071118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.071154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.071390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.071451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.071654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.071732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 
00:26:10.973 [2024-12-10 04:14:05.072020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.072084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.072337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.072404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.072650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.072714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.072980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.073019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.073185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.073221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.073418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.073505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.073795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.073859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.074060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.074124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.074422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.074487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 00:26:10.973 [2024-12-10 04:14:05.074794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.973 [2024-12-10 04:14:05.074864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.973 qpair failed and we were unable to recover it. 
00:26:10.973 [2024-12-10 04:14:05.075103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.075138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.075285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.075319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.075476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.075592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.075792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.075878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.076179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.076245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.076466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.076599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.076844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.076911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.077141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.077174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.077278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.077312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.077432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.077464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 
00:26:10.974 [2024-12-10 04:14:05.077646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.077680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.077824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.077864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.078005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.078042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.078171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.078206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.078425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.078470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.078624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.078659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.078800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.078834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.078950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.078989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.079114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.079149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.079292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.079326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 
00:26:10.974 [2024-12-10 04:14:05.079495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.079531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.079706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.079759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.079935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.079970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.080083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.080116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.080235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.080268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.080378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.080466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.080669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.080703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.080818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.080850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.081018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.081052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.081191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.081224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 
00:26:10.974 [2024-12-10 04:14:05.081404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.081458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.081657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.081690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.081799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.081833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.081962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.081994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.082165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.082197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.082306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.974 [2024-12-10 04:14:05.082371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.974 qpair failed and we were unable to recover it. 00:26:10.974 [2024-12-10 04:14:05.082612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.082646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.082796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.082829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.082970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.083003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.083118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.083150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 
00:26:10.975 [2024-12-10 04:14:05.083257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.083291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.083405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.083437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.083561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.083594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.083711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.083743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.083880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.083912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.084051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.084084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.084219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.084251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.084386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.084448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.084625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.084657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.084783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.084816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 
00:26:10.975 [2024-12-10 04:14:05.084926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.084959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.085072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.085105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.085237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.085270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.085405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.085437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.085587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.085621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.085726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.085758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.085883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.085916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.086058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.086092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.086235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.086267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.086375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.086408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 
00:26:10.975 [2024-12-10 04:14:05.086516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.086558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.086701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.086734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.086858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.086890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.086990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.087024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.087167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.087204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.087373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.087405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.087538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.087581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.087723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.087755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.087893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.087926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.088062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.088095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 
00:26:10.975 [2024-12-10 04:14:05.088210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.088242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.088376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.088409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.088555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.088589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.088737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.088769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.088875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.088908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.975 qpair failed and we were unable to recover it. 00:26:10.975 [2024-12-10 04:14:05.089031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.975 [2024-12-10 04:14:05.089063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.089206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.089239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.089344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.089376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.089524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.089567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.089709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.089742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 
00:26:10.976 [2024-12-10 04:14:05.089879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.089911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.090018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.090051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.090178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.090211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.090327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.090360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.090515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.090589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.090707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.090749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.090898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.090934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.091043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.091077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.091186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.091223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.091378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.091414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 
00:26:10.976 [2024-12-10 04:14:05.091529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.091576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.091697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.091738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.091887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.091924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.092026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.092060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.092198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.092233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.092383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.092419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.092531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.092576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.092748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.092782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.092900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.092936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.093105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.093140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 
00:26:10.976 [2024-12-10 04:14:05.093245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.093279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.093415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.093457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.093599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.093635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.093750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.093785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.093970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.094005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.094157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.094192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.094343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.094381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.094524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.094568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.094687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.094722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.094838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.094881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 
00:26:10.976 [2024-12-10 04:14:05.095000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.095035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.095207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.095241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.095377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.095419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.976 [2024-12-10 04:14:05.095528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.976 [2024-12-10 04:14:05.095578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.976 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.095697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.095731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.095847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.095881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.096030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.096067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.096224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.096258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.096408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.096450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.096612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.096648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 
00:26:10.977 [2024-12-10 04:14:05.096764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.096798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.096940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.096980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.097202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.097236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.097340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.097374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.097497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.097533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.097688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.097723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.097879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.097914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.098069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.098105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.098275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.098309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.098454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.098494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 
00:26:10.977 [2024-12-10 04:14:05.098659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.098695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.098817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.098859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.098970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.099006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.099132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.099167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.099332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.099366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.099474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.099510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.099706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.099741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.099911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.099946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.100056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.100092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.100262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.100297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 
00:26:10.977 [2024-12-10 04:14:05.100437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.100470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.100654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.100706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.100859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.100895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.101011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.101044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.101144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.101176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.101330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.101364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.101476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.101508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.101629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.101668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.977 qpair failed and we were unable to recover it. 00:26:10.977 [2024-12-10 04:14:05.101786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.977 [2024-12-10 04:14:05.101821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.101950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.101984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 
00:26:10.978 [2024-12-10 04:14:05.102101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.102137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.102282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.102317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.102464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.102506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.102617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.102653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.102789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.102821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.102929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.102961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.103099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.103132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.103243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.103275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.103449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.103482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.103605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.103643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 
00:26:10.978 [2024-12-10 04:14:05.103806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.103841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.104009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.104045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.104151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.104185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.104353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.104387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.104529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.104597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.104730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.104765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.104871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.104903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.105012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.105045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.105155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.105189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.105339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.105371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 
00:26:10.978 [2024-12-10 04:14:05.105508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.105540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.105713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.105746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.105866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.105900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.106011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.106044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.106147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.106180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.106277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.106310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.106530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.106577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.106685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.106720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.106829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.106864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.106972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.107007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 
00:26:10.978 [2024-12-10 04:14:05.107189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.107224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.107397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.107432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.107581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.107624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.107742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.107777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.107919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.107954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.108131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.108167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.108268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.978 [2024-12-10 04:14:05.108302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.978 qpair failed and we were unable to recover it. 00:26:10.978 [2024-12-10 04:14:05.108447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.108481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.108648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.108699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.108826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.108863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 
00:26:10.979 [2024-12-10 04:14:05.108966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.109000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.109140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.109174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.109315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.109349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.109536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.109594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.109711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.109745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.109893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.109926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.110036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.110071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.110182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.110220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.110348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.110405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.110539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.110596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 
00:26:10.979 [2024-12-10 04:14:05.110761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.110796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.110888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.110921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.111056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.111090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.111229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.111262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.111407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.111440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.111572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.111607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.111743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.111776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.111938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.111972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.112081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.112115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.112255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.112288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 
00:26:10.979 [2024-12-10 04:14:05.112462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.112495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.112608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.112643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.112780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.112832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.113009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.113059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.113235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.113276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.113391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.113426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.113609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.113653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.113799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.113834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.113945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.113980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.114124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.114161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 
00:26:10.979 [2024-12-10 04:14:05.114293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.114328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.114472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.114505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.114693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.114731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.114941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.114975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.115113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.115156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.115311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.115354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.979 qpair failed and we were unable to recover it. 00:26:10.979 [2024-12-10 04:14:05.115473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.979 [2024-12-10 04:14:05.115508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.115663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.115701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.115861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.115896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.116041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.116075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 
00:26:10.980 [2024-12-10 04:14:05.116211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.116254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.116477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.116513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.116683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.116718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.116856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.116891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.117031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.117065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.117200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.117235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.117359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.117394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.117537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.117584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.117754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.117789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.117951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.117985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 
00:26:10.980 [2024-12-10 04:14:05.118121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.118155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.118325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.118363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.118497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.118531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.118692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.118727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.118838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.118875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.119001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.119036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.119174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.119207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.119353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.119389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.119542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.119589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.119698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.119733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 
00:26:10.980 [2024-12-10 04:14:05.119836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.119881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.119989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.120026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.120176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.120210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.120327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.120361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.120541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.120588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.120719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.120754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.120889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.120926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.121106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.121141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.121250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.121284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.121388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.121427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 
00:26:10.980 [2024-12-10 04:14:05.121539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.121586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.121730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.121764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.121932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.121970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.122082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.980 [2024-12-10 04:14:05.122116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.980 qpair failed and we were unable to recover it. 00:26:10.980 [2024-12-10 04:14:05.122231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.122265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.122435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.122477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.122613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.122649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.122782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.122817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.122967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.123003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.123143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.123178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 
00:26:10.981 [2024-12-10 04:14:05.123318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.123352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.123468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.123503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.123619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.123654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.123817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.123851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.123965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.124008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.124182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.124216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.124357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.124391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.124582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.124618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.124732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.124766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.124875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.124909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 
00:26:10.981 [2024-12-10 04:14:05.125060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.125096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.125241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.125274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.125421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.125460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.125620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.125655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.125774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.125808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.125977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.126013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.126230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.126264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.126405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.126447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.126611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.126647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.126789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.126823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 
00:26:10.981 [2024-12-10 04:14:05.126934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.126969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.127099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.127134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.127259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.127310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.127431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.127467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.127610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.127646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.127765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.127800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.127942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.127976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.128077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.128111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.128233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.128270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.128376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.128411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 
00:26:10.981 [2024-12-10 04:14:05.128561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.128598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.981 [2024-12-10 04:14:05.128717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.981 [2024-12-10 04:14:05.128752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.981 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.128861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.128896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.129051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.129087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.129258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.129293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.129419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.129477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.129632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.129669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.129807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.129841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.129949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.129982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.130093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.130126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 
00:26:10.982 [2024-12-10 04:14:05.130259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.130292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.130391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.130424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.130556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.130590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.130697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.130730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.130861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.130894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.131000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.131034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.131171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.131204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.131315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.131348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.131463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.131496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.131627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.131661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 
00:26:10.982 [2024-12-10 04:14:05.131759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.131792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.131895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.131928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.132043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.132078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.132221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.132254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.132351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.132384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.132487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.132520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.132642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.132680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.132821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.132864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.132978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.133014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.133119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.133153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 
00:26:10.982 [2024-12-10 04:14:05.133289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.133323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.133456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.133491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.133615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.133656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.133770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.133803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.982 [2024-12-10 04:14:05.133946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.982 [2024-12-10 04:14:05.133979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.982 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.134117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.134149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.134294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.134327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.134437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.134470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.134631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.134669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.134818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.134853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 
00:26:10.983 [2024-12-10 04:14:05.134963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.134998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.135140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.135174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.135305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.135340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.135449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.135488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.135637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.135672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.135810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.135844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.135969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.136002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.136140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.136173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.136309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.136343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.136438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.136471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 
00:26:10.983 [2024-12-10 04:14:05.136590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.136623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.136766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.136798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.136927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.136961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.137066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.137098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.137204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.137237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.137381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.137413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.137576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.137629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.137749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.137785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.137931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.137970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 00:26:10.983 [2024-12-10 04:14:05.138134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.983 [2024-12-10 04:14:05.138177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.983 qpair failed and we were unable to recover it. 
00:26:10.983 [2024-12-10 04:14:05.138317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.983 [2024-12-10 04:14:05.138351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420
00:26:10.983 qpair failed and we were unable to recover it.
00:26:10.983 [2024-12-10 04:14:05.138703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.983 [2024-12-10 04:14:05.138737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:10.983 qpair failed and we were unable to recover it.
00:26:10.984 [2024-12-10 04:14:05.141145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.984 [2024-12-10 04:14:05.141183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420
00:26:10.984 qpair failed and we were unable to recover it.
[... the same three-line error pattern repeats for the remaining connection attempts between 04:14:05.138 and 04:14:05.173, cycling over tqpair handles 0x1559fa0, 0x7f5ba4000b90, and 0x7f5bb0000b90; every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:26:10.989 [2024-12-10 04:14:05.173347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.173382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.173522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.173566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.173681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.173715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.173901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.173941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.174078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.174111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.174282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.174316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.174484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.174518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.174666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.174700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.174807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.174848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.174979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.175013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 
00:26:10.989 [2024-12-10 04:14:05.175159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.175193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.175336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.175370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.175493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.175557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.175708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.175743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.175854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.175890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.176029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.176064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.176235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.176269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.176385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.176418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.176575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.176610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.176724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.176760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 
00:26:10.989 [2024-12-10 04:14:05.176904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.176937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.177050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.177084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.177229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.177263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.989 [2024-12-10 04:14:05.177361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-12-10 04:14:05.177395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.989 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.177537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.177577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.177678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.177713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.177874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.177924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.178069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.178105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.178242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.178276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.178419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.178453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 
00:26:10.990 [2024-12-10 04:14:05.178588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.178622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.178792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.178824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.178944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.178978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.179143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.179175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.179309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.179343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.179483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.179519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.179680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.179731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.179882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.179917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.180012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.180045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.180195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.180228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 
00:26:10.990 [2024-12-10 04:14:05.180372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.180405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.180580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.180627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.180737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.180771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.180938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.180977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.181077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.181111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.181254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.181287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.181398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.181431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.181536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.181578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.181714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.181747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.181891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.181924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 
00:26:10.990 [2024-12-10 04:14:05.182062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.182095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.182262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.182295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.182462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.182495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.182605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.182639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.182782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.182816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.182984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.183017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.183146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.183179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.183316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.183350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.183490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.183524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.183703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.183737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 
00:26:10.990 [2024-12-10 04:14:05.183867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.183919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.990 qpair failed and we were unable to recover it. 00:26:10.990 [2024-12-10 04:14:05.184044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.990 [2024-12-10 04:14:05.184080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.184250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.184285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.184432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.184467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.184607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.184643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.184760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.184794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.184963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.184997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.185141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.185177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.185349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.185384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.185569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.185605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 
00:26:10.991 [2024-12-10 04:14:05.185712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.185747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.185913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.185947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.186085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.186118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.186256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.186290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.186430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.186463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.186618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.186654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.186801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.186835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.186981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.187015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.187179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.187213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.187317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.187350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 
00:26:10.991 [2024-12-10 04:14:05.187486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.187520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.187665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.187699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.187806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.187840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.187982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.188022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.188162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.188195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.188335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.188369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.188510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.188554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.188728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.188762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.188871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.188905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.189019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.189053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 
00:26:10.991 [2024-12-10 04:14:05.189156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.189189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.189326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.189361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.189507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.189543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.189709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.189743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.189883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.991 [2024-12-10 04:14:05.189916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.991 qpair failed and we were unable to recover it. 00:26:10.991 [2024-12-10 04:14:05.190031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.190064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.190210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.190243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.190387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.190420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.190584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.190619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.190717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.190752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 
00:26:10.992 [2024-12-10 04:14:05.190851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.190885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.191055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.191089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.191229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.191264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.191433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.191467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.191634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.191668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.191806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.191840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.191966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.192000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.192165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.192198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.192346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.192383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.192521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.192564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 
00:26:10.992 [2024-12-10 04:14:05.192688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.192722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.192854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.192888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.193018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.193052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.193160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.193193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.193334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.193367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.193533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.193581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.193756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.193789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.193924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.193959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.194065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.194098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.194233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.194268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 
00:26:10.992 [2024-12-10 04:14:05.194385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.194419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.194579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.194613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.194783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.194816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.194932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.194971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.195085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.195118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.195252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.195287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.195426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.195460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.195625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.195659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.195794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.195827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 00:26:10.992 [2024-12-10 04:14:05.195962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.992 [2024-12-10 04:14:05.195995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.992 qpair failed and we were unable to recover it. 
00:26:10.992 [2024-12-10 04:14:05.196130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.992 [2024-12-10 04:14:05.196163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420
00:26:10.992 qpair failed and we were unable to recover it.
00:26:10.992 [2024-12-10 04:14:05.196490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.992 [2024-12-10 04:14:05.196542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420
00:26:10.992 qpair failed and we were unable to recover it.
00:26:10.992 [... the same three-line sequence repeats continuously through 2024-12-10 04:14:05.240726 (log time 00:26:10.992-00:26:10.998), alternating between tqpair=0x7f5bb0000b90 and tqpair=0x7f5ba4000b90, always with addr=10.0.0.2, port=4420, errno = 111, and "qpair failed and we were unable to recover it." ...]
00:26:10.998 [2024-12-10 04:14:05.240951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.240998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.241199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.241247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.241446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.241494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.241666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.241716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.241866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.241914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.242049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.242097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.242281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.242330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.242494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.242542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.242772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.242820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.243016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.243064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 
00:26:10.998 [2024-12-10 04:14:05.243242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.243298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.243526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.243621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.243823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.243872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.244051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.244100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.244278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.244325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.244461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.244508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.244750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.244798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.244987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.245034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.245260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.245308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.245485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.245533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 
00:26:10.998 [2024-12-10 04:14:05.245743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.245790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.245947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.245997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.246216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.246264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.246451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.246500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.246704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.246753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.246885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.246933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.247090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.247138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.247356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.247403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.247565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.247614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.247772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.247820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 
00:26:10.998 [2024-12-10 04:14:05.247962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.248011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.248174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.248224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.248407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.248455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.248608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.248658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.248837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.248885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.249154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.249201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.249463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.249523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.249833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.249911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.998 [2024-12-10 04:14:05.250205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.998 [2024-12-10 04:14:05.250281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.998 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.250607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.250657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 
00:26:10.999 [2024-12-10 04:14:05.250879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.250927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.251150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.251198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.251401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.251449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.251627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.251675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.251841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.251888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.252075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.252123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.252307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.252354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.252594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.252647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.252865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.252917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.253080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.253131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 
00:26:10.999 [2024-12-10 04:14:05.253347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.253407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.253608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.253662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.253856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.253907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.254143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.254193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.254442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.254490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.254693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.254741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.254924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.254973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.255201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.255249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.255474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.255521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.255684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.255734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 
00:26:10.999 [2024-12-10 04:14:05.255922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.255969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.256138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.256185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.256365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.256416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.256606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.256658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.256859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.256911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.257118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.257169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.257442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.257499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.257768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.257820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.258020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.258071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.258265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.258317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 
00:26:10.999 [2024-12-10 04:14:05.258504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.258573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.258787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.258840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.258988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.259063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.259319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.259369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.259614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.259666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.259906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.259957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.260190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.260240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.260467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.260518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.260747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.260798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.260971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.261024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 
00:26:10.999 [2024-12-10 04:14:05.261220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.261273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.261478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.261530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:10.999 [2024-12-10 04:14:05.261747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.999 [2024-12-10 04:14:05.261797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:10.999 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.262027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.262077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.262321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.262373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.262611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.262663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.262833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.262886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.263100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.263152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.263370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.263428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.263617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.263669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 
00:26:11.000 [2024-12-10 04:14:05.263879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.263938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.264114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.264167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.264365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.264417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.264594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.264646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.264798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.264851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.265086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.265138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.265336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.265386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.265617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.265669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.265911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.265963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.266119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.266170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 
00:26:11.000 [2024-12-10 04:14:05.266324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.266377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.266619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.266672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.266907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.266958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.267152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.267205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.267455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.267507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.267717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.267768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.267927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.267978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.268211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.268261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.268496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.268556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.268760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.268810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 
00:26:11.000 [2024-12-10 04:14:05.269018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.269069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.269241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.269291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.269483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.269535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.269722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.269773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.269975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.270026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.270231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.270282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.270440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.270491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.270741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.270794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.271057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.271115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.271405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.271463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 
00:26:11.000 [2024-12-10 04:14:05.271688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.271765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.272006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.272083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.272336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.272395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.272671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.272748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.273003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.273080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.000 [2024-12-10 04:14:05.273325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.000 [2024-12-10 04:14:05.273383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.000 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.273631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.273710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.273930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.274004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.274249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.274308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.274531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.274620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 
00:26:11.001 [2024-12-10 04:14:05.274838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.274897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.275135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.275185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.275381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.275433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.275667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.275719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.275920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.275970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.276202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.276252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.276444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.276495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.276752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.276807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.277025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.277079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.277289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.277345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 
00:26:11.001 [2024-12-10 04:14:05.277600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.277655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.277869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.277926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.278136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.278192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.278441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.278495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.278688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.278744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.278988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.279042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.279211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.279265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.279518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.279582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.279836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.279890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.280109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.280163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 
00:26:11.001 [2024-12-10 04:14:05.280411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.280466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.280650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.280705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.280895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.280949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.281195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.281250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.281426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.281479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.281706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.281762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.281974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.282031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.282266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.282321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.282524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.282604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.282864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.282919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 
00:26:11.001 [2024-12-10 04:14:05.283142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.283214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.283437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.283495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.283731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.283779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.283933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.283990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.284134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.284181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.284336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.284385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.284634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.284707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.284929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.284986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.285199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.285256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.285489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.285555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 
00:26:11.001 [2024-12-10 04:14:05.285729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.285794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.286022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.286077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.286242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.286296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.001 qpair failed and we were unable to recover it. 00:26:11.001 [2024-12-10 04:14:05.286540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.001 [2024-12-10 04:14:05.286617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.286809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.286865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.287039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.287094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.287266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.287320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.287481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.287535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.287767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.287822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.288026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.288081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 
00:26:11.002 [2024-12-10 04:14:05.288270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.288324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.288591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.288650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.288865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.288922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.289141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.289195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.289395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.289465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.289672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.289729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.289923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.289979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.290158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.290213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.290448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.290508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.290752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.290807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 
00:26:11.002 [2024-12-10 04:14:05.290982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.291036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.291222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.291279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.291508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.291576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.291760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.291816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.292066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.292119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.292332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.292386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.292603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.292659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.292832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.292889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.293078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.293133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.293396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.293452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 
00:26:11.002 [2024-12-10 04:14:05.293631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.293687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.293910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.293966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.294143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.294200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.294388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.294458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.294687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.294745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.294920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.294977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.295158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.295213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.295424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.295479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.295694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.295751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.295948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.296014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 
00:26:11.002 [2024-12-10 04:14:05.296230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.296297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.296468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.296526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.296881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.296938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.297190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.297246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.297500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.297602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.297798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.297855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.298063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.298119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.298329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.298382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.298601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.298657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.298878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.298933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 
00:26:11.002 [2024-12-10 04:14:05.299103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.002 [2024-12-10 04:14:05.299159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.002 qpair failed and we were unable to recover it. 00:26:11.002 [2024-12-10 04:14:05.299352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.299409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.299628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.299685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.299935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.299990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.300191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.300247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.300432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.300486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.300751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.300819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.301028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.301086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.301323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.301382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.301592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.301652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 
00:26:11.003 [2024-12-10 04:14:05.301915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.301973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.302200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.302261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.302459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.302514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.302769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.302829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.303048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.303104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.303315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.303370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.303589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.303647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.303853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.303908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.304120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.304178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.304433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.304499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 
00:26:11.003 [2024-12-10 04:14:05.304745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.304800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.304989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.305046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.305249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.305304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.305497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.305563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.305781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.305839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.306023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.306080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.306297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.306352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.306542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.306625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.306808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.306865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.307069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.307125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 
00:26:11.003 [2024-12-10 04:14:05.307394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.307464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.307768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.307855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.308092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.308151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.308454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.308520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.308786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.308842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.309022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.309089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.309339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.309399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.309626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.309686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.309905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.309979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.310238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.310294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 
00:26:11.003 [2024-12-10 04:14:05.310537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.310647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.310888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.310949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.311168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.311228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.311423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.311483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.003 qpair failed and we were unable to recover it. 00:26:11.003 [2024-12-10 04:14:05.311748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.003 [2024-12-10 04:14:05.311812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.312060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.312121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.312322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.312382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.312610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.312676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.312940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.313001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.313229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.313289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 
00:26:11.004 [2024-12-10 04:14:05.313563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.313627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.313814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.313874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.314104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.314164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.314358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.314429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.314689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.314752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.314992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.315052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.315321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.315384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.315609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.315671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.315946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.316019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.316247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.316310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 
00:26:11.004 [2024-12-10 04:14:05.316527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.316614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.316821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.316887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.317151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.317211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.317438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.317498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.317724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.317801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.318066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.318128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.318393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.318455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.318752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.318816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.319047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.319107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.319345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.319405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 
00:26:11.004 [2024-12-10 04:14:05.319640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.319716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.319990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.320050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.320329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.320405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.320694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.320756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.321024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.321084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.321339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.321403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.321714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.321776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.322026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.322101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.322411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.322477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 00:26:11.004 [2024-12-10 04:14:05.322758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.004 [2024-12-10 04:14:05.322821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.004 qpair failed and we were unable to recover it. 
00:26:11.004 [2024-12-10 04:14:05.323128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.004 [2024-12-10 04:14:05.323197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420
00:26:11.004 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously between the entry above and the entry below ...]
00:26:11.285 [2024-12-10 04:14:05.371698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.285 [2024-12-10 04:14:05.371728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420
00:26:11.285 qpair failed and we were unable to recover it.
00:26:11.285 [2024-12-10 04:14:05.371862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.371892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.372026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.372056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.372181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.372213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.372324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.372355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.372457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.372485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.372593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.372623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.372751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.372782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.372873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.372917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.373047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.373078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.373224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.373254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 
00:26:11.285 [2024-12-10 04:14:05.373359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.373388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.373512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.373563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.373693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.373722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.373874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.373909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.374023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.374053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.374152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.374181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.374279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.374307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.374436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.374467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.374571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.374601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.374734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.374764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 
00:26:11.285 [2024-12-10 04:14:05.374891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.374922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.375052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.375082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.375270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.375301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.375402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.375432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.375536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.375576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.375684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.375721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.375832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.375861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.375962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.375992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.376089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.376118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.376280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.376311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 
00:26:11.285 [2024-12-10 04:14:05.376432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.285 [2024-12-10 04:14:05.376461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.285 qpair failed and we were unable to recover it. 00:26:11.285 [2024-12-10 04:14:05.376568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.376601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.376716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.376746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.376911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.376941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.377058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.377089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.377218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.377247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.377374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.377411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.377524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.377563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.377696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.377725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.377832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.377862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 
00:26:11.286 [2024-12-10 04:14:05.377990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.378021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.378120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.378149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.378255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.378292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.378429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.378459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.378570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.378601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.378705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.378742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.378883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.378913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.378998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.379032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.379169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.379205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.379311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.379340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 
00:26:11.286 [2024-12-10 04:14:05.379431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.379461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.379584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.379616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.379716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.379746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.379871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.379900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.380056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.380088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.380194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.380223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.380316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.380346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.380475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.380509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.380666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.380697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.380792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.380822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 
00:26:11.286 [2024-12-10 04:14:05.380959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.380990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.381099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.381130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.381254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.381284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.381326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1567f30 (9): Bad file descriptor 00:26:11.286 [2024-12-10 04:14:05.381500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.381557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.381724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.381755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.381855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.381885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.381987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.382017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.382108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.382138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 00:26:11.286 [2024-12-10 04:14:05.382236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.382265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.286 qpair failed and we were unable to recover it. 
00:26:11.286 [2024-12-10 04:14:05.382388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.286 [2024-12-10 04:14:05.382417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.382551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.382582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.382683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.382740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.382913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.382963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.383117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.383168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.383407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.383458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.383634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.383701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.383870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.383921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.384103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.384132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.384297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.384347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 
00:26:11.287 [2024-12-10 04:14:05.384491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.384541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.384713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.384742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.384907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.384957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.385129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.385179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.385371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.385421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.385563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.385593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.385682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.385712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.385804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.385833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.386016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.386050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.386232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.386282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 
00:26:11.287 [2024-12-10 04:14:05.386489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.386578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.386733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.386784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.387034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.387092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.387270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.387321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.387497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.387526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.387636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.387666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.387840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.387869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.388061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.388119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.388362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.388419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.388612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.388652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 
00:26:11.287 [2024-12-10 04:14:05.388751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.388786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.388991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.389043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.389284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.389348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.389568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.389604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.389726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.389756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.389865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.389894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.390028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.390082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.390255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.390303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.390480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.390531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 00:26:11.287 [2024-12-10 04:14:05.390680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.287 [2024-12-10 04:14:05.390710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.287 qpair failed and we were unable to recover it. 
00:26:11.288 [2024-12-10 04:14:05.390857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.390908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.391114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.391164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.391365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.391416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.391615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.391645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.391738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.391767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.391872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.391907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.392025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.392090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.392310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.392368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.392554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.392584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.392736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.392766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 
00:26:11.288 [2024-12-10 04:14:05.392873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.392902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.393046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.393097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.393299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.393351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.393566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.393621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.393746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.393775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.393882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.393911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.394037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.394067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.394164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.394197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.394376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.394433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.394562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.394593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 
00:26:11.288 [2024-12-10 04:14:05.394797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.394858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.395041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.395099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.395259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.395318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.395447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.395478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.395590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.395620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.395716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.395745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.395896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.395946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.396142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.396191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.396393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.396443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.396616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.396649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 
00:26:11.288 [2024-12-10 04:14:05.396812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.396873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.397039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.397091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.397184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.397220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.397335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.397380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.397493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.397526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.397654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.397686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.397863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.397913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.398091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.398140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.288 [2024-12-10 04:14:05.398304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.288 [2024-12-10 04:14:05.398355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.288 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.398530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.398590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 
00:26:11.289 [2024-12-10 04:14:05.398694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.398726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.398933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.398982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.399140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.399195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.399304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.399334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.399435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.399468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.399621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.399652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.399793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.399854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.400097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.400146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.400334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.400384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.400599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.400630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 
00:26:11.289 [2024-12-10 04:14:05.400720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.400750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.400860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.400894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.401023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.401072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.401300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.401349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.401540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.401576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.401683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.401712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.401882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.401943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.402122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.402155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.402324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.402372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.402525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.402561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 
00:26:11.289 [2024-12-10 04:14:05.402661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.402690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.402810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.402839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.402978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.403026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.403180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.403231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.403431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.403484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.403658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.403688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.403812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.403876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.404012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.404060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.404217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.404264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.404428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.404459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 
00:26:11.289 [2024-12-10 04:14:05.404613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.404644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.404791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.404840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.289 qpair failed and we were unable to recover it. 00:26:11.289 [2024-12-10 04:14:05.405067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.289 [2024-12-10 04:14:05.405124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.405284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.405332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.405488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.405535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.405709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.405741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.405929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.405979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.406163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.406213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.406406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.406455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.406632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.406663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 
00:26:11.290 [2024-12-10 04:14:05.406800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.406860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.407033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.407083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.407315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.407366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.407537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.407573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.407689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.407719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.407821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.407884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.408093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.408144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.408354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.408402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.408563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.408610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.408763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.408811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 
00:26:11.290 [2024-12-10 04:14:05.409001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.409051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.409224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.409272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.409420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.409481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.409684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.409733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.409938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.409987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.410176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.410226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.410394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.410442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.410619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.410668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.410834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.410883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.411084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.411134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 
00:26:11.290 [2024-12-10 04:14:05.411328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.411378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.411589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.411638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.411787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.411834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.411975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.412024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.412217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.412265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.412506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.412574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.412725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.412775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.412929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.412978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.413168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.413215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.413437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.413485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 
00:26:11.290 [2024-12-10 04:14:05.413666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.290 [2024-12-10 04:14:05.413727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.290 qpair failed and we were unable to recover it. 00:26:11.290 [2024-12-10 04:14:05.413913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.413961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.414160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.414217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.414415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.414463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.414673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.414724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.414935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.414983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.415135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.415182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.415339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.415388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.415579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.415629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.415784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.415833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 
00:26:11.291 [2024-12-10 04:14:05.416024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.416073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.416259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.416310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.416497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.416573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.416752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.416802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.416997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.417046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.417262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.417311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.417507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.417569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.417768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.417816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.418038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.418090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.418297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.418345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 
00:26:11.291 [2024-12-10 04:14:05.418540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.418600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.418778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.418826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.419010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.419059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.419251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.419298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.419469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.419521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.419706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.419756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.419984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.420035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.420227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.420275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.420462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.420511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.420701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.420749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 
00:26:11.291 [2024-12-10 04:14:05.420943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.420993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.421178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.421226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.421393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.421445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.421674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.421723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.421912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.421961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.422126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.422174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.422338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.422388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.422591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.422640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.422814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.422863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 00:26:11.291 [2024-12-10 04:14:05.423049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.291 [2024-12-10 04:14:05.423097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.291 qpair failed and we were unable to recover it. 
00:26:11.291 [2024-12-10 04:14:05.423315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.423363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.423538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.423605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.423771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.423828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.424066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.424114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.424300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.424349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.424516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.424587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.424780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.424829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.425016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.425076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.425288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.425337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.425498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.425565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 
00:26:11.292 [2024-12-10 04:14:05.425748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.425796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.425945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.425995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.426158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.426207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.426400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.426457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.426665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.426715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.426868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.426918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.427089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.427138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.427293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.427341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.427494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.427559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.427727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.427774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 
00:26:11.292 [2024-12-10 04:14:05.427935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.427997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.428224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.428273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.428444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.428493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.428719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.428768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.428923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.428974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.429200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.429248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.429441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.429493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.429736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.429794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.430000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.430049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.430257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.430305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 
00:26:11.292 [2024-12-10 04:14:05.430468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.430516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.430704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.430753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.430911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.430963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.431157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.431207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.431430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.431478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.431701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.431750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.431941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.431989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.432177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.432225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.292 [2024-12-10 04:14:05.432410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.292 [2024-12-10 04:14:05.432460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.292 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.432669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.432720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 
00:26:11.293 [2024-12-10 04:14:05.432869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.432918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.433122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.433171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.433319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.433374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.433580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.433640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.433871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.433928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.434098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.434146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.434342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.434389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.434588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.434636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.434798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.434847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.435010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.435061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 
00:26:11.293 [2024-12-10 04:14:05.435289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.435337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.435493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.435542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.435745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.435795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.436031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.436079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.436280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.436336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.436584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.436633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.436849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.436898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.437085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.437134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.437271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.437320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.437562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.437619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 
00:26:11.293 [2024-12-10 04:14:05.437826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.437873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.438097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.438146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.438343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.438393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.438581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.438629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.438789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.438837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.438993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.439051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.439203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.439251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.439411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.439460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.439635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.439684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.439831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.439879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 
00:26:11.293 [2024-12-10 04:14:05.440063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.440112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.293 qpair failed and we were unable to recover it. 00:26:11.293 [2024-12-10 04:14:05.440265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.293 [2024-12-10 04:14:05.440314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.440456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.440506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.440721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.440771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.440918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.440967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.441152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.441199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.441354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.441402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.441584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.441633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.441833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.441881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.442043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.442094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 
00:26:11.294 [2024-12-10 04:14:05.442284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.442333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.442572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.442622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.442785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.442842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.443033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.443083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.443233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.443281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.443470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.443519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.443727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.443780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.444015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.444063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.444214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.444263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.444430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.444478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 
00:26:11.294 [2024-12-10 04:14:05.444684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.444732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.444919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.444969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.445166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.445214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.445419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.445469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.445703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.445752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.445975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.446023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.446199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.446249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.446407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.446455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.446631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.446681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.446881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.446930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 
00:26:11.294 [2024-12-10 04:14:05.447087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.447136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.447334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.447382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.447558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.447607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.447792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.447841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.448034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.448082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.448249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.448300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.448527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.448590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.448757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.448805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.448960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.449008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 00:26:11.294 [2024-12-10 04:14:05.449210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.294 [2024-12-10 04:14:05.449264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.294 qpair failed and we were unable to recover it. 
00:26:11.294 [2024-12-10 04:14:05.449467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.449520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.449754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.449805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.450022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.450073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.450252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.450304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.450480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.450532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.450733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.450786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.450959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.451013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.451178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.451230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.451454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.451519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.451722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.451775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 
00:26:11.295 [2024-12-10 04:14:05.452010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.452061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.452236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.452300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.452510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.452587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.452760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.452813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.452976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.453026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.453213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.453264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.453465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.453516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.453743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.453806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.453990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.454044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.454215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.454267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 
00:26:11.295 [2024-12-10 04:14:05.454478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.454531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.454720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.454772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.454978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.455030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.455190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.455252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.455449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.455501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.455729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.455782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.455989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.456040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.456266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.456317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.456485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.456573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.456768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.456825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 
00:26:11.295 [2024-12-10 04:14:05.457027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.457091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.457303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.457358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.457582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.457640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.457831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.457889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.458105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.458160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.458346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.458401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.458629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.458684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.458897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.458961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.295 [2024-12-10 04:14:05.459149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.295 [2024-12-10 04:14:05.459201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.295 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.459374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.459428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 
00:26:11.296 [2024-12-10 04:14:05.459668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.459721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.459920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.459972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.460184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.460235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.460443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.460494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.460714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.460767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.460926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.460979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.461173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.461225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.461423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.461475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.461694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.461746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.461908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.461959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 
00:26:11.296 [2024-12-10 04:14:05.462160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.462221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.462471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.462522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.462761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.462855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.463063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.463115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.463321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.463372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.463572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.463638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.463840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.463892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.464140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.464192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.464377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.464429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.464587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.464640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 
00:26:11.296 [2024-12-10 04:14:05.464799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.464851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.465084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.465145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.465356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.465415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.465587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.465644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.465863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.465917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.466130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.466184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.466375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.466433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.466638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.466695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.466878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.466934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.467104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.467161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 
00:26:11.296 [2024-12-10 04:14:05.467400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.467455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.467683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.467744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.467926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.467991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.468204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.468259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.468522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.468618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.468824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.468879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.469044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.469098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.296 [2024-12-10 04:14:05.469278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.296 [2024-12-10 04:14:05.469334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.296 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.469532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.469615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.469848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.469904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 
00:26:11.297 [2024-12-10 04:14:05.470120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.470177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.470367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.470424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.470630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.470687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.470871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.470928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.471113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.471172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.471380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.471436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.471604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.471661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.471916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.471971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.472212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.472267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.472468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.472525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 
00:26:11.297 [2024-12-10 04:14:05.472777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.472833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.473015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.473072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.473237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.473292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.473501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.473582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.473791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.473846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.474057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.474112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.474294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.474351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.474521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.474604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.474830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.474884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 00:26:11.297 [2024-12-10 04:14:05.475077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.297 [2024-12-10 04:14:05.475132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.297 qpair failed and we were unable to recover it. 
00:26:11.297 [2024-12-10 04:14:05.475374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.297 [2024-12-10 04:14:05.475430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420
00:26:11.297 qpair failed and we were unable to recover it.
00:26:11.297 [... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, with only the event timestamps changing, from 04:14:05.475 through 04:14:05.536 (console timestamps 00:26:11.297-00:26:11.303) ...]
00:26:11.303 [2024-12-10 04:14:05.536589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.536662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.536862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.536931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.537163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.537223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.537463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.537522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.537769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.537828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.538084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.538144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.538373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.538436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.538667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.538730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.538936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.539002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.539226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.539296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 
00:26:11.303 [2024-12-10 04:14:05.539586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.539648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.539839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.539901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.540132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.540191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.540372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.540433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.540674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.540735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.540969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.541031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.541248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.541310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.541532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.541616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.541809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.541868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.542096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.542154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 
00:26:11.303 [2024-12-10 04:14:05.542398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.542456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.542776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.542839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.543074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.543135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.543355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.543416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.543640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.543704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.543937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.543999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.544191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.544250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.303 [2024-12-10 04:14:05.544482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.303 [2024-12-10 04:14:05.544542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.303 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.544838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.544897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.545140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.545199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 
00:26:11.304 [2024-12-10 04:14:05.545427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.545490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.545708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.545771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.545979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.546039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.546266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.546325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.546594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.546655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.546836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.546894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.547153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.547218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.547514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.547591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.547795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.547854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.548104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.548175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 
00:26:11.304 [2024-12-10 04:14:05.548382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.548442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.548699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.548760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.549039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.549097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.549266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.549326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.549599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.549664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.549857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.549927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.550199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.550258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.550481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.550541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.550745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.550804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.551012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.551081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 
00:26:11.304 [2024-12-10 04:14:05.551344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.551416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.551664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.551726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.551976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.552036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.552268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.552327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.552592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.552653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.552876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.552935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.553195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.553267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.553470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.553529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.553786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.553851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.554064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.554123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 
00:26:11.304 [2024-12-10 04:14:05.554375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.554436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.554645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.554708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.554951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.555013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.555243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.555303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.555526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.555601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.555844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.555904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.304 qpair failed and we were unable to recover it. 00:26:11.304 [2024-12-10 04:14:05.556146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.304 [2024-12-10 04:14:05.556205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.556463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.556522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.556748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.556812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.557072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.557133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 
00:26:11.305 [2024-12-10 04:14:05.557367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.557428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.557647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.557707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.557949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.558008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.558271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.558330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.558628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.558690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.558946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.559009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.559216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.559277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.559508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.559581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.559811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.559869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.560041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.560104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 
00:26:11.305 [2024-12-10 04:14:05.560315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.560377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.560644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.560705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.560887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.560947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.561172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.561233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.561452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.561511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.561790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.561857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.562094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.562162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.562410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.562470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.562779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.562840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.563112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.563181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 
00:26:11.305 [2024-12-10 04:14:05.563374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.563433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.563628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.563700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.563900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.563958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.564227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.564288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.564487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.564560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.564761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.564820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.565057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.565115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.565396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.565455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.565683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.565745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.565981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.566049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 
00:26:11.305 [2024-12-10 04:14:05.566277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.566337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.566561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.566621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.566804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.566864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.567050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.567109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.567335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.567397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.305 qpair failed and we were unable to recover it. 00:26:11.305 [2024-12-10 04:14:05.567616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.305 [2024-12-10 04:14:05.567677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.567920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.567981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.568197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.568268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.568458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.568519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.568772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.568832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 
00:26:11.306 [2024-12-10 04:14:05.569010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.569071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.569337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.569397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.569592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.569653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.569881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.569949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.570163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.570223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.570453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.570513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.570816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.570887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.571099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.571160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.571413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.571472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.571730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.571790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 
00:26:11.306 [2024-12-10 04:14:05.571983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.572041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.572225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.572287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.572525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.572606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.572875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.572944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.573176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.573236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.573457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.573517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.573773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.573832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.574015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.574077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.574303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.574365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.574564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.574636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 
00:26:11.306 [2024-12-10 04:14:05.574821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.574881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.575052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.575110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.575327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.575386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.575614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.575675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.575857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.575916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.576116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.576183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.576422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.576481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.576723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.576785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.576959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.577019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.577222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.577281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 
00:26:11.306 [2024-12-10 04:14:05.577469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.577527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.577766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.577841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.578035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.578103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.578295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.306 [2024-12-10 04:14:05.578355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.306 qpair failed and we were unable to recover it. 00:26:11.306 [2024-12-10 04:14:05.578573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.578634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.578869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.578930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.579202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.579262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.579498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.579575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.579850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.579909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.580148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.580209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 
00:26:11.307 [2024-12-10 04:14:05.580440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.580501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.580769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.580860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.581106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.581169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.581361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.581425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.581663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.581726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.582000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.582060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.582295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.582355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.582634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.582696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.582962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.583022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.583250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.583309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 
00:26:11.307 [2024-12-10 04:14:05.583568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.583629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.583894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.583955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.584193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.584253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.584528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.584604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.584838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.584901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.585077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.585138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.585359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.585419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.585638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.585700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.585932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.585994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.586204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.586275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 
00:26:11.307 [2024-12-10 04:14:05.586559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.586620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.586844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.586904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.587172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.587230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.587418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.587480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.587675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.587739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.587957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.588021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.588272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.307 [2024-12-10 04:14:05.588338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.307 qpair failed and we were unable to recover it. 00:26:11.307 [2024-12-10 04:14:05.588652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.588713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.588986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.589046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.589399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.589464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 
00:26:11.308 [2024-12-10 04:14:05.589783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.589844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.590118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.590184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.590394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.590459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.590757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.590819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.591094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.591171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.591463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.591527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.591824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.591906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.592177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.592242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.592502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.592580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.592847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.592907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 
00:26:11.308 [2024-12-10 04:14:05.593169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.593234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.593465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.593525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.593815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.593902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.594143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.594211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.594459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.594524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.594771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.594832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.595166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.595233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.595496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.595597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.595896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.595962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.596268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.596334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 
00:26:11.308 [2024-12-10 04:14:05.596596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.596687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.596981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.597045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.597233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.597298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.597558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.597625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.597920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.597984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.598254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.598319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.598617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.598684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.598973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.599037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.599331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.599396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.599645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.599722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 
00:26:11.308 [2024-12-10 04:14:05.600014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.600079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.600271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.600336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.600627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.308 [2024-12-10 04:14:05.600693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.308 qpair failed and we were unable to recover it. 00:26:11.308 [2024-12-10 04:14:05.600982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.601047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.601262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.601328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.601606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.601671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.601919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.601983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.602272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.602338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.602611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.602677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.602930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.602994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 
00:26:11.309 [2024-12-10 04:14:05.603295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.603362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.603566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.603634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.603881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.603947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.604173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.604241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.604453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.604518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.604839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.604904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.605110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.605174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.605456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.605521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.605827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.605899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.606154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.606218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 
00:26:11.309 [2024-12-10 04:14:05.606431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.606496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.606805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.606906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.607138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.607206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.607504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.607592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.607870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.607937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.608184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.608248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.608499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.608605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.608908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.608977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.609216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.609281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.609490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.609568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 
00:26:11.309 [2024-12-10 04:14:05.609832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.609899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.610144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.610209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.610456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.610521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.610793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.610858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.611081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.611149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.611392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.611457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.611756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.611823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.612077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.612141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.612403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.612469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.309 [2024-12-10 04:14:05.612742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.612809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 
00:26:11.309 [2024-12-10 04:14:05.613112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.309 [2024-12-10 04:14:05.613177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.309 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.613419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.613483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.613740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.613806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.614107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.614171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.614460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.614524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.614791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.614855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.615110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.615175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.615375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.615444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.615698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.615765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.616017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.616081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 
00:26:11.310 [2024-12-10 04:14:05.616372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.616437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.616704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.616772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.617005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.617069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.617371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.617442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.617701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.617768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.618002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.618065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.618267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.618334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.618633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.618701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.618946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.619010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.619307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.619371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 
00:26:11.310 [2024-12-10 04:14:05.619662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.619729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.619990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.620056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.620302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.620369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.620661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.620729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.620988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.621055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.621274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.621341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.621593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.621679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.621896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.621963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.622221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.622285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.622576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.622643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 
00:26:11.310 [2024-12-10 04:14:05.622883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.622948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.623255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.623319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.623527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.623616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.623866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.623932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.624149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.624213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.624457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.624522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.624849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.624914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.625160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.625225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.625510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.625593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.310 qpair failed and we were unable to recover it. 00:26:11.310 [2024-12-10 04:14:05.625837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.310 [2024-12-10 04:14:05.625902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.311 qpair failed and we were unable to recover it. 
00:26:11.311 [2024-12-10 04:14:05.626159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.311 [2024-12-10 04:14:05.626225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.311 qpair failed and we were unable to recover it. 00:26:11.311 [2024-12-10 04:14:05.626467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.311 [2024-12-10 04:14:05.626534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.311 qpair failed and we were unable to recover it. 00:26:11.311 [2024-12-10 04:14:05.626852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.311 [2024-12-10 04:14:05.626917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.311 qpair failed and we were unable to recover it. 00:26:11.311 [2024-12-10 04:14:05.627163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.311 [2024-12-10 04:14:05.627228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.311 qpair failed and we were unable to recover it. 00:26:11.311 [2024-12-10 04:14:05.627465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.311 [2024-12-10 04:14:05.627531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.311 qpair failed and we were unable to recover it. 00:26:11.311 [2024-12-10 04:14:05.627824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.311 [2024-12-10 04:14:05.627894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.311 qpair failed and we were unable to recover it. 00:26:11.311 [2024-12-10 04:14:05.628189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.311 [2024-12-10 04:14:05.628254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.311 qpair failed and we were unable to recover it. 00:26:11.311 [2024-12-10 04:14:05.628506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.311 [2024-12-10 04:14:05.628591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.311 qpair failed and we were unable to recover it. 00:26:11.311 [2024-12-10 04:14:05.628886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.311 [2024-12-10 04:14:05.628951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.311 qpair failed and we were unable to recover it. 00:26:11.311 [2024-12-10 04:14:05.629194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.311 [2024-12-10 04:14:05.629259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.311 qpair failed and we were unable to recover it. 
00:26:11.311 [2024-12-10 04:14:05.629540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.311 [2024-12-10 04:14:05.629622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:11.311 qpair failed and we were unable to recover it.
[... the same three-line error repeats continuously from 04:14:05.629540 through 04:14:05.697745 (console timestamps 00:26:11.311 - 00:26:11.596): every connect() attempt to addr=10.0.0.2, port=4420 fails with errno = 111 and each qpair fails without recovery ...]
00:26:11.596 [2024-12-10 04:14:05.698031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.596 [2024-12-10 04:14:05.698096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.596 qpair failed and we were unable to recover it. 00:26:11.596 [2024-12-10 04:14:05.698343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.596 [2024-12-10 04:14:05.698406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.596 qpair failed and we were unable to recover it. 00:26:11.596 [2024-12-10 04:14:05.698620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.596 [2024-12-10 04:14:05.698688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.596 qpair failed and we were unable to recover it. 00:26:11.596 [2024-12-10 04:14:05.698983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.596 [2024-12-10 04:14:05.699048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.596 qpair failed and we were unable to recover it. 00:26:11.596 [2024-12-10 04:14:05.699345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.596 [2024-12-10 04:14:05.699409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.596 qpair failed and we were unable to recover it. 00:26:11.596 [2024-12-10 04:14:05.699691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.596 [2024-12-10 04:14:05.699757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.596 qpair failed and we were unable to recover it. 00:26:11.596 [2024-12-10 04:14:05.699970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.596 [2024-12-10 04:14:05.700045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.596 qpair failed and we were unable to recover it. 00:26:11.596 [2024-12-10 04:14:05.700348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.596 [2024-12-10 04:14:05.700413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.596 qpair failed and we were unable to recover it. 00:26:11.596 [2024-12-10 04:14:05.700724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.596 [2024-12-10 04:14:05.700793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.596 qpair failed and we were unable to recover it. 00:26:11.596 [2024-12-10 04:14:05.701007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.596 [2024-12-10 04:14:05.701072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.596 qpair failed and we were unable to recover it. 
00:26:11.597 [2024-12-10 04:14:05.701357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.701421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.701706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.701772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.702057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.702121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.702335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.702399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.702640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.702707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.702942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.703006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.703256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.703323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.703537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.703624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.703913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.703977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.704215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.704279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 
00:26:11.597 [2024-12-10 04:14:05.704592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.704658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.704952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.705015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.705228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.705294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.705535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.705617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.705909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.705974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.706222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.706286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.706560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.706627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.706835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.706907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.707191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.707255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.707514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.707621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 
00:26:11.597 [2024-12-10 04:14:05.707879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.707949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.708196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.708261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.708506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.708589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.708864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.708932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.709179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.709243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.709486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.709567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.709818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.709888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.710140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.710204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.710483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.710566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.710707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.710743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 
00:26:11.597 [2024-12-10 04:14:05.710884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.710919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.711022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.711058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.711203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.711238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.711360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.711395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.711570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.711621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.711763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.597 [2024-12-10 04:14:05.711797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.597 qpair failed and we were unable to recover it. 00:26:11.597 [2024-12-10 04:14:05.711918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.711957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.712168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.712232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.712458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.712524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.712696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.712730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 
00:26:11.598 [2024-12-10 04:14:05.712850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.712883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.712995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.713029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.713176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.713209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.713321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.713355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.713472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.713506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.713676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.713711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.713827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.713861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.714005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.714038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.714252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.714324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.714555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.714589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 
00:26:11.598 [2024-12-10 04:14:05.714746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.714781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.714990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.715052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.715220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.715253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.715471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.715535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.715722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.715756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.715924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.715958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.716124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.716193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.716443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.716507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.716687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.716721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.716860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.716893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 
00:26:11.598 [2024-12-10 04:14:05.717008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.717041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.717181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.717251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.717495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.717530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.717685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.717720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.717888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.717922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.718170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.718222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.718481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.718515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.718700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.718734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.718918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.718953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.719096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.719130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 
00:26:11.598 [2024-12-10 04:14:05.719394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.719458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.719668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.719702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.719837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.598 [2024-12-10 04:14:05.719870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.598 qpair failed and we were unable to recover it. 00:26:11.598 [2024-12-10 04:14:05.720190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.720255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.720542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.720629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.720755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.720789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.720933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.721008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.721297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.721360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.721608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.721644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.721760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.721793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 
00:26:11.599 [2024-12-10 04:14:05.721933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.721966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.722127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.722192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.722433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.722467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.722604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.722639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.722748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.722782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.722991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.723056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.723330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.723380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.723574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.723611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.723748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.723782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.723966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.723999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 
00:26:11.599 [2024-12-10 04:14:05.724169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.724204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.724485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.724564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.724700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.724734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.724906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.724970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.725215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.725281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.725514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.725565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.725738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.725771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.725912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.725946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.726148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.726214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.726454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.726489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 
00:26:11.599 [2024-12-10 04:14:05.726602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.726637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.726842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.726878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.727019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.727052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.727292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.727326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.727456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.727490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.727638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.727672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.727812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.727845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.728032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.728066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.599 qpair failed and we were unable to recover it. 00:26:11.599 [2024-12-10 04:14:05.728210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.599 [2024-12-10 04:14:05.728244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.728348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.728381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 
00:26:11.600 [2024-12-10 04:14:05.728621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.728687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.728930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.728995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.729274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.729339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.729623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.729690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.729985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.730019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.730152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.730186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.730450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.730488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.730665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.730700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.730848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.730902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.731194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.731258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 
00:26:11.600 [2024-12-10 04:14:05.731535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.731576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.731690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.731725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.731921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.731986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.732269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.732303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.732438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.732472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.732574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.732609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.732719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.732754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.732947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.732981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.733220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.733299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.733596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.733645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 
00:26:11.600 [2024-12-10 04:14:05.733820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.733869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.734121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.734186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.734439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.734505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.734713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.734762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.734971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.735037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.735295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.735361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.735615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.735680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.735935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.735983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.736214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.736248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.736444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.736509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 
00:26:11.600 [2024-12-10 04:14:05.736786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.736851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.737038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.737103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.737284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.737347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.737634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.737711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.600 [2024-12-10 04:14:05.737973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.600 [2024-12-10 04:14:05.738007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.600 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.738177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.738228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.738443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.738508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.738753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.738787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.738933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.738966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.739240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.739288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 
00:26:11.601 [2024-12-10 04:14:05.739500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.739562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.739705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.739739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.739926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.739991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.740130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.740164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.740345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.740426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.740687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.740757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.740931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.740965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.741094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.741128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.741326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.741360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.741497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.741531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 
00:26:11.601 [2024-12-10 04:14:05.741740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.741805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.742059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.742092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.742293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.742358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.742543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.742628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.742875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.742940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.743151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.743198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.743383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.743464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.743738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.743772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.743943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.744015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.744244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.744292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 
00:26:11.601 [2024-12-10 04:14:05.744516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.744617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.744881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.744914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.745020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.745056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.745220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.745254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.745476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.745541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.745840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.745905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.746203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.746268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.746511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.601 [2024-12-10 04:14:05.746596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.601 qpair failed and we were unable to recover it. 00:26:11.601 [2024-12-10 04:14:05.746790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.746855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.747102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.747167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 
00:26:11.602 [2024-12-10 04:14:05.747414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.747479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.747739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.747805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.748045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.748109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.748317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.748393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.748640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.748707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.748944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.748978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.749146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.749179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.749447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.749512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.749768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.749802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.749935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.749968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 
00:26:11.602 [2024-12-10 04:14:05.750153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.750218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.750433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.750499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.750799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.750863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.751117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.751181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.751462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.751495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.751671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.751706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.751919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.751954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.752071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.752105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.752337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.752403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.752668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.752736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 
00:26:11.602 [2024-12-10 04:14:05.753008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.753042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.753169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.753203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.753307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.753341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.753488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.753521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.753765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.753832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.754087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.754121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.754260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.754295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.754560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.754622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.754873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.754951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.755243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.755321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 
00:26:11.602 [2024-12-10 04:14:05.755633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.755712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.755967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.756043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.756310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.602 [2024-12-10 04:14:05.756371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.602 qpair failed and we were unable to recover it. 00:26:11.602 [2024-12-10 04:14:05.756669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.756747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.757037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.757114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.757341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.757400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.757698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.757777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.757985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.758062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.758290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.758350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.758588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.758649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 
00:26:11.603 [2024-12-10 04:14:05.758900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.758976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.759229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.759263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.759393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.759426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.759631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.759719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.760032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.760109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.760341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.760400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.760620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.760700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.760995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.761029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.761172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.761206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.761315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.761347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 
00:26:11.603 [2024-12-10 04:14:05.761485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.761517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.761756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.761834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.762097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.762173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.762406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.762440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.762608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.762678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.762906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.762984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.763277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.763356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.763653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.763733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.764031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.764107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.764377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.764437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 
00:26:11.603 [2024-12-10 04:14:05.764688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.764766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.765067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.765101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.765270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.765304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.765491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.765580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.765826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.765905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.766200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.766278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.766554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.766589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.766724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.766758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.766980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.767057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.767298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.767331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 
00:26:11.603 [2024-12-10 04:14:05.767478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.767529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.603 [2024-12-10 04:14:05.767849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.603 [2024-12-10 04:14:05.767926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.603 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.768195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.768272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.768523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.768565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.768679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.768712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.768961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.768995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.769135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.769169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.769409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.769469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.769786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.769865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.770122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.770198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 
00:26:11.604 [2024-12-10 04:14:05.770476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.770536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.770822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.770857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.771001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.771035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.771177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.771216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.771434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.771494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.771811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2505915 Killed "${NVMF_APP[@]}" "$@" 00:26:11.604 [2024-12-10 04:14:05.771890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.772189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.772266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.772512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.772590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 04:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.772850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.772935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 
00:26:11.604 04:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:11.604 [2024-12-10 04:14:05.773235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.773311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 04:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:11.604 [2024-12-10 04:14:05.773538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.773583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 04:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:11.604 [2024-12-10 04:14:05.773704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.773738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 04:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:11.604 [2024-12-10 04:14:05.773845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.773878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.774016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.774051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.774164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.774197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.774337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.774370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.774537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.774582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 
00:26:11.604 [2024-12-10 04:14:05.774696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.774730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.774909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.774970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.775218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.775252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.775407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.775460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.775791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.775870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.776114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.776193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.776440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.776474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.776615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.776650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.776760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.776796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 00:26:11.604 [2024-12-10 04:14:05.776935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.604 [2024-12-10 04:14:05.776970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.604 qpair failed and we were unable to recover it. 
00:26:11.605 [2024-12-10 04:14:05.777106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.777145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.777255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.777289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.777466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.777525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.777677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.777712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.777837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.777871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.777986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.778019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.778234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.778293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.778506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.778540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.778701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.778734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.778923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.778982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 
00:26:11.605 [2024-12-10 04:14:05.779097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.779130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 04:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2506466 00:26:11.605 [2024-12-10 04:14:05.779265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.779300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b9 04:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:11.605 0 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 04:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2506466 00:26:11.605 [2024-12-10 04:14:05.779474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.779575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 04:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2506466 ']' 00:26:11.605 [2024-12-10 04:14:05.779744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.779779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 04:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.605 04:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.605 [2024-12-10 04:14:05.780006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 04:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.605 [2024-12-10 04:14:05.780084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 
00:26:11.605 04:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.605 04:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:11.605 [2024-12-10 04:14:05.780361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.780420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.780678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.780713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.780829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.780891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.781110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.781186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.781352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.781412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.781665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.781698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.781811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.781863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.782141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.782229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.782450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.782518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 
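The trace interleaved above shows how the tc2 test case brings up its target: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace (nvmf/common.sh@508-509), and waitforlisten then blocks until the new process, pid 2506466, is serving RPC on /var/tmp/spdk.sock, giving up after max_retries=100 attempts. A simplified sketch of that wait loop follows, assuming SPDK's scripts/rpc.py with a rpc_get_methods call is an acceptable readiness probe and using a 0.5 s retry delay; the real waitforlisten helper in autotest_common.sh differs in detail:

#!/usr/bin/env bash
# Wait until the freshly started nvmf_tgt answers RPC on its UNIX socket,
# or bail out if the process dies or the retry budget is exhausted.
pid=2506466                     # nvmfpid from the trace above
rpc_addr=/var/tmp/spdk.sock     # rpc_addr from the trace above
max_retries=100

for ((i = 0; i < max_retries; i++)); do
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "nvmf_tgt (pid $pid) exited before it started listening" >&2
        exit 1
    fi
    # rpc.py -s <sock> rpc_get_methods succeeds once the RPC server is up;
    # treating that as the readiness check is an assumption of this sketch.
    if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        echo "target pid $pid is listening on $rpc_addr"
        exit 0
    fi
    sleep 0.5
done
echo "timed out waiting for $rpc_addr after $max_retries attempts" >&2
exit 1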
00:26:11.605 [2024-12-10 04:14:05.782689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.782723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.782870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.782904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.783127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.783187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.783391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.783427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-10 04:14:05.783538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-10 04:14:05.783580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.783697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.783732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.783922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.783983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.784268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.784327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.784522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.784611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.784754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.784787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 
00:26:11.606 [2024-12-10 04:14:05.784898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.784932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.785071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.785105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.785298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.785362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.785503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.785537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.785694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.785728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.785899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.785962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.786201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.786261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.786447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.786508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.786706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.786741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.786920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.786982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 
00:26:11.606 [2024-12-10 04:14:05.787200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.787280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.787513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.787556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.787701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.787735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.787851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.787887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.788024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.788059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.788291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.788352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.788534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.788620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.788756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.788790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.788986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.789046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.789288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.789349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 
00:26:11.606 [2024-12-10 04:14:05.789539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.789626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.789743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.789778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.789947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.789981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.790113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.790146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.790324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.790357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.790467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.790500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.790692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-10 04:14:05.790744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-10 04:14:05.790900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.790962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.791194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.791275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.791528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.791617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 
00:26:11.607 [2024-12-10 04:14:05.791735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.791768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.791878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.791911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.792038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.792072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.792211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.792244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.792402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.792461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.792711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.792763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.792876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.792913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.793032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.793067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.793205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.793238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.793377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.793411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 
00:26:11.607 [2024-12-10 04:14:05.793619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.793653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.793803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.793839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.793963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.793997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.794104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.794137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.794321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.794382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.794574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.794627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.794745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.794778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.794940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.794999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.795236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.795294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.795573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.795625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 
00:26:11.607 [2024-12-10 04:14:05.795737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.795771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.795948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.795981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.796087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.796120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.796296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.796328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.796601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.796636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.796743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.796781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.797003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.797036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.797142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.797175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.797361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.797419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.797623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.797662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 
00:26:11.607 [2024-12-10 04:14:05.797777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.797812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.797923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.797956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.798068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.798102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.798240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.798275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.798384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.798418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.798620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-10 04:14:05.798655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-10 04:14:05.798787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.798823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.798964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.798998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.799196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.799258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.799509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.799595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 
00:26:11.608 [2024-12-10 04:14:05.799750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.799783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.799900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.799932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.800039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.800072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.800182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.800215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.800389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.800424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.800576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.800611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.800708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.800742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.800849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.800883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.800991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.801025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.801154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.801188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 
00:26:11.608 [2024-12-10 04:14:05.801325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.801359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.801566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.801624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.801786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.801837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.801978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.802016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.802170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.802205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.802340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.802382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.802525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.802587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.802699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.802735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.802879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.802913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.803085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.803139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 
00:26:11.608 [2024-12-10 04:14:05.803332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.803365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.803511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.803563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.803694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.803729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.803874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.803908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.804024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.804080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.804264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.804319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.804472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.804506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.804750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.804809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.805000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.805056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.805265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.805320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 
00:26:11.608 [2024-12-10 04:14:05.805495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.805561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.805723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.805777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.805943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.805998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.806184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.806241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-10 04:14:05.806408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-10 04:14:05.806462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.806651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.806707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.806893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.806955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.807175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.807232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.807406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.807462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.807695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.807754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 
00:26:11.609 [2024-12-10 04:14:05.807977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.808031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.808195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.808274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.808512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.808604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.808817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.808870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.809032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.809086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.809290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.809323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.809469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.809502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.809616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.809650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.809769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.809803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.809990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.810023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 
00:26:11.609 [2024-12-10 04:14:05.810128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.810161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.810267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.810299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.810419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.810481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.810745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.810779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.810897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.810929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.811126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.811180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.811363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.811396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.811510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.811542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.811737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.811790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.811996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.812029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 
00:26:11.609 [2024-12-10 04:14:05.812172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.812205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.812335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.812368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.812506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.812539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.812666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.812699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.812869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.812923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.813074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.813128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.813244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.813284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.813398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.813431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.813537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.813584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.813705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.813767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 
00:26:11.609 [2024-12-10 04:14:05.814007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.814060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.814281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-10 04:14:05.814336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-10 04:14:05.814506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.814567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.814680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.814713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.814884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.814938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.815187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.815241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.815416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.815479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.815589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.815624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.815779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.815833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.816045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.816099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 
00:26:11.610 [2024-12-10 04:14:05.816322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.816377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.816625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.816681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.816861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.816915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.817124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.817178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.817351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.817405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.817579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.817635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.817843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.817897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.818119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.818182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.818433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.818497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.818751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.818806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 
00:26:11.610 [2024-12-10 04:14:05.818993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.819048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.819188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.819221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.819326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.819358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.819462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.819501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.819680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.819732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.819902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.819955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.820138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.820191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.820442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.820497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.820723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.820758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.820899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.820932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 
00:26:11.610 [2024-12-10 04:14:05.821134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.821189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.821300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.821333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.821474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.821508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.821685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.821741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.821898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.821953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.822185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.822218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.822329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-10 04:14:05.822362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-10 04:14:05.822513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.822554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.822672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.822705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.822819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.822852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 
00:26:11.611 [2024-12-10 04:14:05.822990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.823023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.823212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.823266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.823514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.823578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.823692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.823725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.823839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.823873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.824068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.824123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.824309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.824362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.824575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.824631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.824795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.824850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.825057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.825111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 
00:26:11.611 [2024-12-10 04:14:05.825269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.825341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.825482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.825515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.825725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.825758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.825856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.825889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.826033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.826092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.826237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.826300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.826471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.826525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.826738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.826771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.826915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.826947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.827151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.827184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 
00:26:11.611 [2024-12-10 04:14:05.827320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.827353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.827479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.827533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.827721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.827774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.827971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.828003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.828176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.828210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.828353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.828408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.828577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.828633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.828812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.828866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.829074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.829107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.829249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.829282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 
00:26:11.611 [2024-12-10 04:14:05.829518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.611 [2024-12-10 04:14:05.829610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.611 qpair failed and we were unable to recover it. 00:26:11.611 [2024-12-10 04:14:05.829804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.829837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.829984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.830017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.830030] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:26:11.612 [2024-12-10 04:14:05.830107] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.612 [2024-12-10 04:14:05.830157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.830189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.830346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.830398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.830648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.830700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.830911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.830948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.831091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.831124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.831285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.831336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 
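The entry interleaved above shows the SPDK nvmf application starting up (SPDK v25.01-pre, DPDK 24.03.0, with the listed DPDK EAL parameters) while the initiator side keeps reporting errno = 111. On Linux that is ECONNREFUSED: the host at 10.0.0.2 is reachable but nothing is yet accepting connections on port 4420, so every posix_sock_create attempt is refused. Purely as an illustration (this is not SPDK code), the following minimal C program produces the same errno when it connects to a reachable address on which no listener is bound; the address and port are placeholders copied from the log, not an endpoint you are expected to have.

/* Illustrative only: connect() to a reachable host with no listener on the
 * port fails with errno == ECONNREFUSED (111 on Linux), the same errno
 * reported by posix_sock_create in the log above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* placeholder target address */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* If the host is up but the port has no listener, this prints
         * errno 111 (ECONNREFUSED), matching the log entries. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}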
00:26:11.612 [2024-12-10 04:14:05.831492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.831542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.831732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.831783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.832025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.832059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.832172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.832205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.832411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.832463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.832695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.832747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.832975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.833008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.833116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.833149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.833254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.833287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.833424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.833457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 
00:26:11.612 [2024-12-10 04:14:05.833712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.833746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.833871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.833905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.834075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.834108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.834218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.834251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.834411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.834462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.834642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.834694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.834906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.834956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.835153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.835204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.835405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.835458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.835665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.835730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 
00:26:11.612 [2024-12-10 04:14:05.835977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.836040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.836265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.836316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.836554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.836587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.836703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.836736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.836969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.837009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.837115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.837148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.837258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.837291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.837486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.837519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.837705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.837738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 00:26:11.612 [2024-12-10 04:14:05.837950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.612 [2024-12-10 04:14:05.837983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.612 qpair failed and we were unable to recover it. 
00:26:11.612 [2024-12-10 04:14:05.838127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.838160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.838356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.838407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.838631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.838696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.838910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.838989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.839189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.839240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.839435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.839485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.839725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.839789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.840014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.840077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.840347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.840425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.840593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.840679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 
00:26:11.613 [2024-12-10 04:14:05.840987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.841056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.841288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.841342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.841564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.841600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.841868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.841936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.842147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.842213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.842472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.842531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.842740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.842822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.843119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.843183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.843452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.843488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.843613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.843648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 
00:26:11.613 [2024-12-10 04:14:05.843862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.843928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.844168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.844261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.844542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.844616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.844883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.844983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.845175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.845244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.845432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.845485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.845782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.845853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.846110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.846178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.846444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.846497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.846735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.846792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 
00:26:11.613 [2024-12-10 04:14:05.846983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.847049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.847228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.847313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.847518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.847623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.847916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.847983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.848301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.848356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.848578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.848635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.848841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.848894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.849054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-10 04:14:05.849107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-10 04:14:05.849315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-10 04:14:05.849370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-10 04:14:05.849539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-10 04:14:05.849612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 
00:26:11.614 [2024-12-10 04:14:05.849817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-10 04:14:05.849876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-10 04:14:05.850117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-10 04:14:05.850173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-10 04:14:05.850331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-10 04:14:05.850383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-10 04:14:05.850535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-10 04:14:05.850602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-10 04:14:05.850844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-10 04:14:05.850899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-10 04:14:05.851097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-10 04:14:05.851149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-10 04:14:05.851381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-10 04:14:05.851440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-10 04:14:05.851661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-10 04:14:05.851726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-10 04:14:05.851989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-10 04:14:05.852074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-10 04:14:05.852321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-10 04:14:05.852388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 
00:26:11.614 [2024-12-10 04:14:05.852607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.614 [2024-12-10 04:14:05.852669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420
00:26:11.614 qpair failed and we were unable to recover it.
00:26:11.614 [2024-12-10 04:14:05.852879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.614 [2024-12-10 04:14:05.852935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420
00:26:11.614 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for the remaining connection attempts in this window (timestamps 04:14:05.853 through 04:14:05.888), cycling through tqpair handles 0x7f5bb0000b90, 0x7f5ba4000b90, 0x7f5ba8000b90 and 0x1559fa0, always against addr=10.0.0.2, port=4420, and always ending in "qpair failed and we were unable to recover it." ...]
00:26:11.620 [2024-12-10 04:14:05.888240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.620 [2024-12-10 04:14:05.888267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:11.620 qpair failed and we were unable to recover it.
00:26:11.620 [2024-12-10 04:14:05.888374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.888414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.888530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.888567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.888667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.888706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.888799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.888825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.888912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.888938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.889056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.889082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.889173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.889200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.889334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.889373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.889463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.889490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.889611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.889638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 
00:26:11.620 [2024-12-10 04:14:05.889732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.889757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.889849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.889875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.889956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.889982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.890090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.890116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.890203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.890230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.890339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.890366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.890465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.890505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.890612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.890641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.890737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.890762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.890883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.890909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 
00:26:11.620 [2024-12-10 04:14:05.890990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.891015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.891114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.891142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.891234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.891261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.891339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.891363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.891447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.891472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.891588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.891613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.891727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.891753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.891835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.891859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.891978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.892003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.892114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.892144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 
00:26:11.620 [2024-12-10 04:14:05.892255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.620 [2024-12-10 04:14:05.892283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.620 qpair failed and we were unable to recover it. 00:26:11.620 [2024-12-10 04:14:05.892399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.892428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.892562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.892602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.892739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.892766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.892851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.892875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.892996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.893021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.893135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.893160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.893298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.893324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.893419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.893446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.893529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.893564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 
00:26:11.621 [2024-12-10 04:14:05.893655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.893682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.893798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.893824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.893939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.893965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.894055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.894080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.894190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.894215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.894326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.894352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.894435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.894460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.894576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.894603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.894692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.894717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.894796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.894820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 
00:26:11.621 [2024-12-10 04:14:05.894960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.894986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.895076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.895101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.895192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.895217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.895299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.895323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.895400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.895424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.895507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.895539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.895660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.895692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.895791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.895817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.895906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.895933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.896028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.896053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 
00:26:11.621 [2024-12-10 04:14:05.896140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.896166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.896284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.896310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.896420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.896445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.896534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.896572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.896669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.896695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.896788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.896812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.896901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.896926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.621 qpair failed and we were unable to recover it. 00:26:11.621 [2024-12-10 04:14:05.897016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.621 [2024-12-10 04:14:05.897041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.897157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.897183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.897290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.897316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 
00:26:11.622 [2024-12-10 04:14:05.897407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.897431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.897510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.897534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.897632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.897657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.897745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.897772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.897856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.897881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.897972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.897996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.898102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.898133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.898222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.898247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.898332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.898356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.898447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.898473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 
00:26:11.622 [2024-12-10 04:14:05.898571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.898596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.898688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.898712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.898828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.898852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.898973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.898999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.899084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.899109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.899224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.899252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.899395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.899421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.899536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.899567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.899675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.899700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.899787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.899813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 
00:26:11.622 [2024-12-10 04:14:05.899902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.899928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.900045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.900073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.900207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.900246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.900451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.900485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.900604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.900630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.900743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.900769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.900908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.900939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.901029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.901053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.901138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.901164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.901282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.901307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 
00:26:11.622 [2024-12-10 04:14:05.901441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.901467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.901582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.901608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.901688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.901712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.901797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.901821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.901904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.901930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.902015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.622 [2024-12-10 04:14:05.902040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.622 qpair failed and we were unable to recover it. 00:26:11.622 [2024-12-10 04:14:05.902123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.902148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.902233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.902259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.902336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.902362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.902483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.902512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 
00:26:11.623 [2024-12-10 04:14:05.902677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.902704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.902840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.902866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.902970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.902996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.903082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.903108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.903199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.903224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.903304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.903329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.903445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.903470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.903583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.903610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.903695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.903720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.903794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.903822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 
00:26:11.623 [2024-12-10 04:14:05.903912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.903937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.904053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.904081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.904172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.904202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.904299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.904336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.904422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.904449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.904536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.904572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.904659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.904685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.904806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.904833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.904928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.904953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.905059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.905085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 
00:26:11.623 [2024-12-10 04:14:05.905200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.905225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.905315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.905342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.905428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.905454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.905559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.905588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.905675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.905701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.905790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.905820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.905934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.905960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.906053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.906080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.906168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.906194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.906307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.906333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 
00:26:11.623 [2024-12-10 04:14:05.906453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.906480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.906573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.906599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.906680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.906705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.623 [2024-12-10 04:14:05.906794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.623 [2024-12-10 04:14:05.906820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.623 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.906908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.906934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.907049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.907078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.907199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.907234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.907356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.907383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.907499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.907526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.907648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.907675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 
00:26:11.624 [2024-12-10 04:14:05.907788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.907819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.907906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.907932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.908074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.908100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.908239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.908265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.908350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.908379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.908515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.908541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.908667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.908693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.908781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.908806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.908909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.908934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.909023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.909048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 
00:26:11.624 [2024-12-10 04:14:05.909132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.909157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.909265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.909291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.909365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.909389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.909472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.909503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.909627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.909656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.909775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.909805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.909909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.909941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.910057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.910084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.910170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.910202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.910307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.910333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 
00:26:11.624 [2024-12-10 04:14:05.910413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.910439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.910556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.910582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.910670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.910696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.910782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.910807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.910920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.910945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.911059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.911085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.911163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.911189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.911299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.624 [2024-12-10 04:14:05.911340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.624 qpair failed and we were unable to recover it. 00:26:11.624 [2024-12-10 04:14:05.911439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.911468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.911622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.911662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 
00:26:11.625 [2024-12-10 04:14:05.911810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.911838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.911930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.911959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.912074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.912101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.912193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.912221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.912308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.912334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.912428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.912459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.912582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.912610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.912720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.912747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.912841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.912867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.913012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.913039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 
00:26:11.625 [2024-12-10 04:14:05.913148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.913174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.913292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.913319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.913412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.913437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.913562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.913588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.913703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.913728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.913840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.913866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.913978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.914004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.914098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.914125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.914211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.914237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.914374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.914401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 
00:26:11.625 [2024-12-10 04:14:05.914479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.914503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.914622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.914651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.914747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.914774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.914897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.914924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.915008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.915034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.915115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.915140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.915254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.915279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.915362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.915387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.915467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.915493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.915636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.915662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 
00:26:11.625 [2024-12-10 04:14:05.915750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.915776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.915888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.915914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.915997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.916023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.625 qpair failed and we were unable to recover it. 00:26:11.625 [2024-12-10 04:14:05.916103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.625 [2024-12-10 04:14:05.916128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.916282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.916323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.916430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.916458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.916552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.916580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.916671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.916698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.916824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.916853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.916968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.916995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 
00:26:11.626 [2024-12-10 04:14:05.917088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.917116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.917202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.917227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.917306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.917330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.917447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.917473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.917591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.917617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.917737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.917762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.917845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.917870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.917947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.917971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.918058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.918084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.918160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.918184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 
00:26:11.626 [2024-12-10 04:14:05.918323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.918348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.918435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.918475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.918679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.918714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.918839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.918865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.918957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.918982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.919124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.919153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.919268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.919293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.919378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.919405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.919497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.919523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.919629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.919662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 
00:26:11.626 [2024-12-10 04:14:05.919761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.919789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.919909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.919935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.920028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.920056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.920146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.920173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.920262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.920296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.920379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.920406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.920501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.920527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.920641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.920667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.920757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.920783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.920899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.920925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 
00:26:11.626 [2024-12-10 04:14:05.921006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.921031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.921137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.626 [2024-12-10 04:14:05.921165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.626 qpair failed and we were unable to recover it. 00:26:11.626 [2024-12-10 04:14:05.921248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.921274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.921404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.921443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.921524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.921563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.921657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.921684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.921792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.921819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.921902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.921929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.922032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.922060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.922190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.922219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 
00:26:11.627 [2024-12-10 04:14:05.922299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.922324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.922399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.922426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.922518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.922552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.922648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.922675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.922766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.922791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.922886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.922921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.923035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.923061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.923153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.923181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.923269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.923295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.923405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.923430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 
00:26:11.627 [2024-12-10 04:14:05.923540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.923571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.923657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.923687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.923768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.923793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.923901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.923926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.924042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.924067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.924142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.924167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.924243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.924268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.924379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.924404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.924480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:11.627 [2024-12-10 04:14:05.924497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.924535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.924664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.924692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 
00:26:11.627 [2024-12-10 04:14:05.924781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.924808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.924892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.924918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.924996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.925022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.925131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.925157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.925246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.925277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.925353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.925378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.925484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.925509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.925610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.925637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.925753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.925778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.627 [2024-12-10 04:14:05.925891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.925917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 
00:26:11.627 [2024-12-10 04:14:05.925998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.627 [2024-12-10 04:14:05.926024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.627 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.926104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.926129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.926216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.926241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.926324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.926351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.926438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.926464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.926558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.926585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.926670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.926696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.926783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.926810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.926895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.926921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.927004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.927033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 
00:26:11.628 [2024-12-10 04:14:05.927126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.927151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.927237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.927263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.927351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.927376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.927460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.927485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.927574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.927600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.927708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.927734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.927839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.927865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.927948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.927973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.928058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.928084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.928193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.928219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 
00:26:11.628 [2024-12-10 04:14:05.928305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.928330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.928440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.928473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.928558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.928585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.928676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.928702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.928800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.928829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.929025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.929054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.929168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.929194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.929282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.929315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.929418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.929444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.929523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.929559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 
00:26:11.628 [2024-12-10 04:14:05.929645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.929670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.929758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.929786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.929876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.929901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.929982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.930009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.930124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.930149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.930277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.930307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.930400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.930428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.930538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.930588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.930685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.930714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 00:26:11.628 [2024-12-10 04:14:05.930838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.628 [2024-12-10 04:14:05.930865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.628 qpair failed and we were unable to recover it. 
00:26:11.629 [2024-12-10 04:14:05.930948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.930975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.931057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.931082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.931171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.931200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.931289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.931316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.931402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.931429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.931558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.931584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.931674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.931700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.931814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.931840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.931923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.931956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.932045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.932071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 
00:26:11.629 [2024-12-10 04:14:05.932170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.932196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.932314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.932342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.932430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.932463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.932572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.932600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.932688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.932714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.932809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.932835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.932949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.932975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.933091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.933116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.933202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.933229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.933306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.933332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 
00:26:11.629 [2024-12-10 04:14:05.933414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.933442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.933553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.933580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.933705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.933730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.933821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.933847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.933960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.933985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.934109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.934145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.934238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.934265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.934354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.934380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.934471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.934498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.934589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.934617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 
00:26:11.629 [2024-12-10 04:14:05.934707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.934733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.934839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.934865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.934977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.629 [2024-12-10 04:14:05.935003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.629 qpair failed and we were unable to recover it. 00:26:11.629 [2024-12-10 04:14:05.935095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.935121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.935203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.935229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.935347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.935376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.935466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.935494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.935607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.935634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.935717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.935744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.935839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.935865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 
00:26:11.630 [2024-12-10 04:14:05.935968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.935993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.936085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.936114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.936204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.936233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.936324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.936349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.936449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.936476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.936568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.936595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.936676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.936703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.936798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.936831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.936924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.936956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.937046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.937073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 
00:26:11.630 [2024-12-10 04:14:05.937169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.937195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.937312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.937339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.937451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.937477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.937573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.937601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.937685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.937711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.937797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.937823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.937911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.937937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.938023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.938049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.938139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.938164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.938247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.938275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 
00:26:11.630 [2024-12-10 04:14:05.938366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.938395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.938494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.938519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.938622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.938657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.938748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.938774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.938865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.938892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.939009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.939036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.939121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.939147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.939239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.939264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.939374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.939400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.939487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.939513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 
00:26:11.630 [2024-12-10 04:14:05.939636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.939662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.630 [2024-12-10 04:14:05.939760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.630 [2024-12-10 04:14:05.939789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.630 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.939880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.939907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.939987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.940016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.940124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.940150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.940267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.940297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.940379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.940405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.940483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.940509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.940602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.940628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.940720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.940745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 
00:26:11.631 [2024-12-10 04:14:05.940853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.940879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.940995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.941021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.941127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.941153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.941243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.941269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.941349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.941374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.941457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.941483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.941572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.941598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.941675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.941700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.941816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.941841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.941958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.941984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 
00:26:11.631 [2024-12-10 04:14:05.942063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.942088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.942190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.942231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.942330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.942359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.942445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.942471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.942556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.942585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.942687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.942713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.942823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.942849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.942958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.942984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.943068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.943094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.943180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.943206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 
00:26:11.631 [2024-12-10 04:14:05.943321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.943346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.943432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.943458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.943561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.943607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.943699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.943728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.943809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.943837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.943953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.943984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.944117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.944144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.944249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.944279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.944368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.944395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.944509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.944535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 
00:26:11.631 [2024-12-10 04:14:05.944652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.631 [2024-12-10 04:14:05.944678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.631 qpair failed and we were unable to recover it. 00:26:11.631 [2024-12-10 04:14:05.944759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.944784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.944868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.944894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.944976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.945004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.945120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.945146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.945229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.945255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.945369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.945394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.945480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.945507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.945613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.945653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.945771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.945801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 
00:26:11.632 [2024-12-10 04:14:05.945914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.945940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.946112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.946138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.946250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.946276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.946390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.946417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.946508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.946535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.946642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.946671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.946756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.946783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.946899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.946926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.947038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.947064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.947171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.947215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 
00:26:11.632 [2024-12-10 04:14:05.947366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.947395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.947509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.947537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.947635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.947662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.947747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.947773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.947909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.947936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.948018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.948046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.948166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.948195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.948282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.948309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.948387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.948413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.948527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.948563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 
00:26:11.632 [2024-12-10 04:14:05.948646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.948672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.948781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.948807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.948924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.948951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.949047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.949074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.949190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.949225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.949364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.949392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.949511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.949539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.949635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.949662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.949785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.949823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 00:26:11.632 [2024-12-10 04:14:05.949943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.632 [2024-12-10 04:14:05.949970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.632 qpair failed and we were unable to recover it. 
00:26:11.633 [2024-12-10 04:14:05.950057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.950085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.950177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.950202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.950316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.950342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.950455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.950480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.950560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.950588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.950669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.950694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.950778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.950809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.950951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.950978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.951090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.951116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.951236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.951262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 
00:26:11.633 [2024-12-10 04:14:05.951353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.951380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.951521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.951552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.951638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.951664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.951755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.951781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.951930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.951957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.952078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.952107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.952237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.952277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.952391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.952430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.952559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.952587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.952677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.952709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 
00:26:11.633 [2024-12-10 04:14:05.952798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.952825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.952932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.952959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.953051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.953078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.953191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.953216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.953323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.953349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.953429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.953455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.633 [2024-12-10 04:14:05.953554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.633 [2024-12-10 04:14:05.953581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.633 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.953666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.953693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.953779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.953806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.953943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.953970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 
00:26:11.919 [2024-12-10 04:14:05.954063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.954091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.954177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.954204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.954327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.954366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.954491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.954533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.954768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.954807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.954927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.954955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.955098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.955125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.955241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.955266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.955350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.955377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.955464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.955490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 
00:26:11.919 [2024-12-10 04:14:05.955585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.955611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.955724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.955750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.955860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.955886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.955968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.955993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.956105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.956132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.956219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.956246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.956339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.956372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.956493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.956522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.956631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.956671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.956796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.956824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 
00:26:11.919 [2024-12-10 04:14:05.956909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.956935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.957020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.957048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.957159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.957186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-10 04:14:05.957299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-10 04:14:05.957328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.957405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.957432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.957561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.957590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.957706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.957732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.957821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.957846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.957959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.957984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.958097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.958123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 
00:26:11.920 [2024-12-10 04:14:05.958211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.958237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.958380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.958407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.958498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.958526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.958681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.958708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.958791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.958817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.958903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.958929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.959009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.959035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.959150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.959176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.959258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.959285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.959396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.959422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 
00:26:11.920 [2024-12-10 04:14:05.959508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.959537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.959664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.959690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.959767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.959792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.959884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.959909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.960003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.960043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.960130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.960157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.960272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.960300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.960391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.960418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.960559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.960586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.960674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.960700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 
00:26:11.920 [2024-12-10 04:14:05.960783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.960809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.960896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.960923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.961010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.961037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.961150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.961177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.961309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.961348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.961445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.961474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.961589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.961621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.961733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.961760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.961844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.961871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.961951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.961978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 
00:26:11.920 [2024-12-10 04:14:05.962096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-10 04:14:05.962123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-10 04:14:05.962243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.962272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.962362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.962389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.962471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.962497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.962584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.962611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.962724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.962750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.962828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.962854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.962972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.962999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.963092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.963120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.963239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.963265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 
00:26:11.921 [2024-12-10 04:14:05.963391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.963419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.963527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.963561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.963644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.963670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.963755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.963783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.963898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.963924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.964033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.964060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.964148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.964175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.964301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.964329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.964439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.964465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.964558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.964587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 
00:26:11.921 [2024-12-10 04:14:05.964670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.964696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.964807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.964833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.964912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.964938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.965053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.965081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.965175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.965204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.965291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.965318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.965400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.965427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.965534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.965565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.965680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.965707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.965788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.965814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 
00:26:11.921 [2024-12-10 04:14:05.965924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.965952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.966042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.966068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.966183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.966212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.966297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.966323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.966422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.966462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.966560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.966588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.966674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.966707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.966797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-10 04:14:05.966824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-10 04:14:05.966906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.966932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.967013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.967039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 
00:26:11.922 [2024-12-10 04:14:05.967149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.967175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.967271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.967311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.967434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.967463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.967587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.967616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.967734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.967760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.967873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.967898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.967986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.968015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.968097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.968125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.968212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.968242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.968334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.968363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 
00:26:11.922 [2024-12-10 04:14:05.968459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.968484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.968567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.968594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.968682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.968708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.968819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.968845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.968956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.968982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.969096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.969122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.969209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.969234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.969330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.969369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.969514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.969541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.969642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.969669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 
00:26:11.922 [2024-12-10 04:14:05.969779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.969806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.969887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.969912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.970052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.970078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.970166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.970197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.970290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.970330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.970451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.970480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.970563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.970592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.970679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.970706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.970794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.970820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.970937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.970964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 
00:26:11.922 [2024-12-10 04:14:05.971079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.971104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.971220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.971247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.971333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.971359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.971441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.971468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.971624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.971666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.971767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.971794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.971904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-10 04:14:05.971931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-10 04:14:05.972048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-10 04:14:05.972074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-10 04:14:05.972153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-10 04:14:05.972180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-10 04:14:05.972286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-10 04:14:05.972313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 
00:26:11.926 [2024-12-10 04:14:05.986136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:11.926 [2024-12-10 04:14:05.986169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:11.926 [2024-12-10 04:14:05.986184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:11.926 [2024-12-10 04:14:05.986195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:11.926 [2024-12-10 04:14:05.986206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
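The notices above spell out how to inspect the tracepoints enabled by the 0xFFFF group mask. A minimal sketch of acting on them, using only the commands the notices themselves name (the copy destination is an arbitrary example path), would be:

  # Snapshot the running nvmf app's trace events at runtime
  spdk_trace -s nvmf -i 0
  # Or keep the raw trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0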
00:26:11.926 [2024-12-10 04:14:05.987760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:26:11.926 [2024-12-10 04:14:05.987883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:26:11.926 [2024-12-10 04:14:05.987826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:26:11.926 [2024-12-10 04:14:05.987887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
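The reactor notices report the SPDK event framework starting one reactor per configured core (4-7 here). A hedged way to confirm that pinning on a live system (assuming the target binary is nvmf_tgt, which this log does not state) is to list the application's threads and the CPU each one is on:

  # Hypothetical check: show thread IDs, current CPU (psr) and names;
  # the reactor threads should sit on cores 4-7 per the notices above.
  ps -Lp "$(pgrep -f nvmf_tgt | head -n1)" -o tid,psr,comm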
00:26:11.928 [2024-12-10 04:14:05.997227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-10 04:14:05.997254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-10 04:14:05.997397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-10 04:14:05.997425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-10 04:14:05.997518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-10 04:14:05.997550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-10 04:14:05.997633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-10 04:14:05.997659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-10 04:14:05.997766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-10 04:14:05.997794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-10 04:14:05.997889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-10 04:14:05.997915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-10 04:14:05.998000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-10 04:14:05.998027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-10 04:14:05.998139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-10 04:14:05.998167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-10 04:14:05.998251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-10 04:14:05.998279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-10 04:14:05.998395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-10 04:14:05.998421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 
00:26:11.928 [2024-12-10 04:14:05.998506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-10 04:14:05.998536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-10 04:14:05.998619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-10 04:14:05.998644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-10 04:14:05.998732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-10 04:14:05.998758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-10 04:14:05.998848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-10 04:14:05.998874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-10 04:14:05.998981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-10 04:14:05.999008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-10 04:14:05.999121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-10 04:14:05.999150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-10 04:14:05.999232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-10 04:14:05.999259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-10 04:14:05.999369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:05.999395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:05.999501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:05.999527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:05.999628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:05.999655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 
00:26:11.929 [2024-12-10 04:14:05.999770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:05.999796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:05.999878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:05.999905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:05.999995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.000022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.000149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.000188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.000295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.000323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.000411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.000439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.000523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.000558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.000649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.000674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.000760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.000787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.000870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.000897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 
00:26:11.929 [2024-12-10 04:14:06.000976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.001002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.001121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.001150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.001232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.001258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.001338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.001368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.001480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.001506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.001600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.001626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.001738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.001764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.001844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.001874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.001955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.001982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.002093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.002118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 
00:26:11.929 [2024-12-10 04:14:06.002205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.002232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.002320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.002349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.002444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.002471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.002576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.002603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.002693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.002720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.002798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.002824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.002901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.002928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.003007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.003033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.003145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.003171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-10 04:14:06.003252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-10 04:14:06.003278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 
00:26:11.929 [2024-12-10 04:14:06.003403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.003443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.003566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.003596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.003682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.003709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.003787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.003813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.003920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.003946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.004034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.004061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.004147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.004173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.004299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.004339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.004438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.004465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.004548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.004576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 
00:26:11.930 [2024-12-10 04:14:06.004654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.004682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.004763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.004789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.004862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.004888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.005003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.005030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.005125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.005156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.005247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.005275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.005390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.005416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.005527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.005559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.005642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.005669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.005780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.005806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 
00:26:11.930 [2024-12-10 04:14:06.005900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.005927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.006042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.006068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.006164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.006191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.006276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.006301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.006384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.006411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.006530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.006561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.006643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.006669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.006761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.006792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.006884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.006910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.006986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.007013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 
00:26:11.930 [2024-12-10 04:14:06.007096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.007123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.007234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-10 04:14:06.007261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-10 04:14:06.007355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.007381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.007469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.007495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.007588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.007615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.007702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.007728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.007812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.007836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.007953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.007979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.008099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.008126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.008234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.008260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 
00:26:11.931 [2024-12-10 04:14:06.008348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.008388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.008490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.008518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.008658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.008698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.008822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.008850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.008935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.008962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.009045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.009071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.009156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.009185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.009274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.009304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.009390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.009419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.009511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.009538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 
00:26:11.931 [2024-12-10 04:14:06.009629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.009654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.009770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.009796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.009907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.009933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.010022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.010047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.010129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.010161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.010246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.010273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.010417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.010445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.010530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.010561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.010640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.010667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.010749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.010774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 
00:26:11.931 [2024-12-10 04:14:06.010855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.010880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.010961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.010990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.011112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.011139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.011224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.011249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.011355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.011381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-10 04:14:06.011469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-10 04:14:06.011495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-10 04:14:06.011590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-10 04:14:06.011620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-10 04:14:06.011720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-10 04:14:06.011747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-10 04:14:06.011845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-10 04:14:06.011873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-10 04:14:06.011955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-10 04:14:06.011983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 
00:26:11.932 [2024-12-10 04:14:06.012064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-10 04:14:06.012090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-10 04:14:06.012164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-10 04:14:06.012191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-10 04:14:06.012305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-10 04:14:06.012332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-10 04:14:06.012420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-10 04:14:06.012447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-10 04:14:06.012531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-10 04:14:06.012564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-10 04:14:06.012669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-10 04:14:06.012696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-10 04:14:06.012779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-10 04:14:06.012804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-10 04:14:06.012885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-10 04:14:06.012910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-10 04:14:06.013031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-10 04:14:06.013059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-10 04:14:06.013173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-10 04:14:06.013199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 
00:26:11.932 [2024-12-10 04:14:06.013314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.932 [2024-12-10 04:14:06.013340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420
00:26:11.932 qpair failed and we were unable to recover it.
00:26:11.932-00:26:11.939 [2024-12-10 04:14:06.013436 - 04:14:06.038978] The same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously for tqpair handles 0x1559fa0, 0x7f5ba4000b90, 0x7f5ba8000b90, and 0x7f5bb0000b90, all targeting addr=10.0.0.2, port=4420.
00:26:11.939 [2024-12-10 04:14:06.039064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.039090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.039164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.039188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.039265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.039292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.039386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.039411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.039499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.039524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.039631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.039657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.039745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.039770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.039846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.039871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.039949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.039973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.040052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.040076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 
00:26:11.939 [2024-12-10 04:14:06.040167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.040195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.040287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.040314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.040412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.040440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.040527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.040567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.040648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.040674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.040763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.040788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.040867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.040891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.040975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.041002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.041092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.041119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.041202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.041228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 
00:26:11.939 [2024-12-10 04:14:06.041325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.041353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.041442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.041469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.041565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.041599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.041794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.041821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.041906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.041932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.042042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.042068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.042156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.042183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.042266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.042296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.042381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.042409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.042496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.042522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 
00:26:11.939 [2024-12-10 04:14:06.042611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.042641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.042752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.042778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.042865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-10 04:14:06.042894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-10 04:14:06.042980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.043005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.043091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.043117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.043230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.043255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.043335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.043361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.043444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.043468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.043577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.043603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.043690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.043718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 
00:26:11.940 [2024-12-10 04:14:06.043798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.043822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.043907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.043939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.044045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.044072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.044181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.044207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.044292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.044319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.044411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.044451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.044542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.044578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.044670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.044700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.044780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.044804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.044887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.044912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 
00:26:11.940 [2024-12-10 04:14:06.045000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.045024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.045136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.045162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.045246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.045275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.045396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.045423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.045524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.045557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.045648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.045672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.045779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.045804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.045914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.045939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.046030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.046059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.046140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.046175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 
00:26:11.940 [2024-12-10 04:14:06.046259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.046284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.046367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.046395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.046482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.046510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.046607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.046634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.046723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.046748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.046838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.046863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.046950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.046975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.047056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.047081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.047171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.047197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.047310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.047336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 
00:26:11.940 [2024-12-10 04:14:06.047449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.047475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.047561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-10 04:14:06.047596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-10 04:14:06.047686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.047712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.047799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.047824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.047905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.047931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.048029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.048055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.048145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.048173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.048282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.048319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.048421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.048448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.048538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.048570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 
00:26:11.941 [2024-12-10 04:14:06.048684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.048710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.048788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.048814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.048924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.048949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.049034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.049059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.049146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.049171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.049252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.049278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.049368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.049395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.049492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.049520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.049620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.049647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.049753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.049778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 
00:26:11.941 [2024-12-10 04:14:06.049868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.049895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.049976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.050002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.050089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.050116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.050237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.050262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.050346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.050371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.050465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.050490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.050577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.050602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.050680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.050704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.050791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.050815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.050891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.050916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 
00:26:11.941 [2024-12-10 04:14:06.051003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.051028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.051115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.051141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.051239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.051266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.051385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.051413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.051505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.051530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.051620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.051646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.051738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.051765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.051853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.051880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.051967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.051991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-10 04:14:06.052084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.052111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 
00:26:11.941 [2024-12-10 04:14:06.052217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-10 04:14:06.052242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.052330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.052355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.052447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.052472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.052595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.052624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.052732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.052758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.052832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.052857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.052948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.052972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.053053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.053079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.053200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.053226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.053316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.053343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 
00:26:11.942 [2024-12-10 04:14:06.053439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.053477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.053567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.053594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.053674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.053699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.053795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.053819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.053930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.053955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.054052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.054080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.054166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.054197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.054284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.054310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.054431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.054457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.054662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.054688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 
00:26:11.942 [2024-12-10 04:14:06.054774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.054800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.054913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.054938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.055040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.055066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.055163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.055202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.055324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.055350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.055463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.055488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.055578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.055604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.055696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.055721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.055808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.055833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.055916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.055941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 
00:26:11.942 [2024-12-10 04:14:06.056047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.056085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.056180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.056206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.056322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.056346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.056428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.056453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.056570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.056599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.056697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.056722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.056810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.056835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.056942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.056967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.057063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.057091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-10 04:14:06.057178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-10 04:14:06.057204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 
00:26:11.943 [2024-12-10 04:14:06.057288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.057313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.057392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.057417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.057501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.057526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.057651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.057678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.057765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.057790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.057873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.057898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.057982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.058007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.058087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.058111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.058187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.058212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.058298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.058323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 
00:26:11.943 [2024-12-10 04:14:06.058407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.058435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.058528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.058564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.058652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.058683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.058766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.058793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.058878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.058904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.058988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.059014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.059109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.059134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.059224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.059249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.059349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.059387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.059476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.059502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 
00:26:11.943 [2024-12-10 04:14:06.059629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.059656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.059743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.059769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.059846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.059871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.059951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.059976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.060057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.060084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.060176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.060201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.060286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.060311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.060392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.060417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.060504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.060531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.060631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.060658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 
00:26:11.943 [2024-12-10 04:14:06.060753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.060782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-10 04:14:06.060900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-10 04:14:06.060925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.061016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.061041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.061132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.061157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.061241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.061266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.061347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.061372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.061455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.061482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.061575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.061601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.061687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.061713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.061788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.061814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 
00:26:11.944 [2024-12-10 04:14:06.061928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.061952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.062036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.062062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.062145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.062171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.062289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.062318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.062447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.062488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.062575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.062602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.062697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.062722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.062796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.062821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.062902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.062927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.063007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.063032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 
00:26:11.944 [2024-12-10 04:14:06.063106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.063132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.063252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.063281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.063379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.063413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.063506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.063531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.063627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.063652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.063743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.063767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.063857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.063884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.064007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.064033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.064146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.064171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.064265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.064290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 
00:26:11.944 [2024-12-10 04:14:06.064379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.064417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.064514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.064542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.064657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.064690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.064780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.064806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.064888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.064913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.065021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.065051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.065137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.065163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.065244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.065273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.065364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.065389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-10 04:14:06.065471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-10 04:14:06.065497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 
00:26:11.944 [2024-12-10 04:14:06.065599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.065626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.065711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.065737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.065845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.065871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.065962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.065987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.066074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.066102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.066228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.066254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.066334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.066359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.066439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.066464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.066540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.066571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.066670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.066695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 
00:26:11.945 [2024-12-10 04:14:06.066787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.066812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.066895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.066919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.067031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.067056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.067130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.067154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.067244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.067269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.067349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.067374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.067494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.067522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.067625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.067653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.067750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.067788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.067883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.067909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 
00:26:11.945 [2024-12-10 04:14:06.068006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.068032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.068120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.068145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.068230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.068257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.068334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.068359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.068436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.068460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.068562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.068588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.068670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.068695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.068775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.068803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.068913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.068939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.069073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.069114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 
00:26:11.945 [2024-12-10 04:14:06.069207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.069234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.069349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.069380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.069513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.069539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.069638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.069664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.069749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.069774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.069854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.069879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.069954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.069979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.070100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.070127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.070242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-10 04:14:06.070268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-10 04:14:06.070351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.070377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 
00:26:11.946 [2024-12-10 04:14:06.070463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.070494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.070581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.070607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.070717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.070741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.070832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.070857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.070945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.070970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.071051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.071075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.071156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.071181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.071255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.071280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.071393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.071418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.071495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.071522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 
00:26:11.946 [2024-12-10 04:14:06.071614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.071641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.071729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.071754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.071835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.071860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.071972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.071998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.072096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.072121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.072202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.072229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.072315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.072341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.072425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.072450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.072562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.072587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.072666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.072690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 
00:26:11.946 [2024-12-10 04:14:06.072783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.072807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.072895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.072921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.073002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.073028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.073119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.073152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.073244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.073271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.073388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.073414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.073507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.073532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.073626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.073657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.073740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.073766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.073858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.073882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 
00:26:11.946 [2024-12-10 04:14:06.073978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.074006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.074084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.074110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.074194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.074220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.074300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.074325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.074463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.074487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.074586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.074611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.074694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.074718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.074794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-10 04:14:06.074819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-10 04:14:06.074905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.947 [2024-12-10 04:14:06.074929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.947 qpair failed and we were unable to recover it. 00:26:11.947 [2024-12-10 04:14:06.075047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.947 [2024-12-10 04:14:06.075073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.947 qpair failed and we were unable to recover it. 
00:26:11.947 [2024-12-10 04:14:06.075158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.947 [2024-12-10 04:14:06.075185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420
00:26:11.947 qpair failed and we were unable to recover it.
[... the same three-record pattern (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats for the remaining connection attempts logged between 04:14:06.075 and 04:14:06.100, cycling through tqpair=0x7f5bb0000b90, 0x7f5ba4000b90, 0x7f5ba8000b90, and 0x1559fa0, all against addr=10.0.0.2, port=4420 ...]
00:26:11.952 [2024-12-10 04:14:06.100112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.952 [2024-12-10 04:14:06.100139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420
00:26:11.952 qpair failed and we were unable to recover it.
00:26:11.952 [2024-12-10 04:14:06.100227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.952 [2024-12-10 04:14:06.100254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.952 qpair failed and we were unable to recover it. 00:26:11.952 [2024-12-10 04:14:06.100351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.952 [2024-12-10 04:14:06.100388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.952 qpair failed and we were unable to recover it. 00:26:11.952 [2024-12-10 04:14:06.100481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.952 [2024-12-10 04:14:06.100507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.952 qpair failed and we were unable to recover it. 00:26:11.952 [2024-12-10 04:14:06.100598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.952 [2024-12-10 04:14:06.100625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.952 qpair failed and we were unable to recover it. 00:26:11.952 [2024-12-10 04:14:06.100706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.952 [2024-12-10 04:14:06.100731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.952 qpair failed and we were unable to recover it. 00:26:11.952 [2024-12-10 04:14:06.100823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.952 [2024-12-10 04:14:06.100850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.952 qpair failed and we were unable to recover it. 00:26:11.952 [2024-12-10 04:14:06.100932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.952 [2024-12-10 04:14:06.100957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.952 qpair failed and we were unable to recover it. 00:26:11.952 [2024-12-10 04:14:06.101047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.952 [2024-12-10 04:14:06.101074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.952 qpair failed and we were unable to recover it. 00:26:11.952 [2024-12-10 04:14:06.101150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.952 [2024-12-10 04:14:06.101175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.952 qpair failed and we were unable to recover it. 00:26:11.952 [2024-12-10 04:14:06.101261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.952 [2024-12-10 04:14:06.101286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.952 qpair failed and we were unable to recover it. 
00:26:11.952 [2024-12-10 04:14:06.101368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.101392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.101481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.101507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.101607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.101633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.101718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.101742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.101819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.101843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.101957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.101981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.102072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.102099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.102186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.102212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.102317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.102355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.102453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.102479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 
00:26:11.953 [2024-12-10 04:14:06.102568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.102593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.102673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.102698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.102789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.102813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.102893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.102916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.102999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.103022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.103112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.103136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.103227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.103254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.103345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.103371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.103461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.103487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.103573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.103599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 
00:26:11.953 [2024-12-10 04:14:06.103675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.103700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.103805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.103829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.103917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.103946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.104052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.104076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.104160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.104184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.104275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.104300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.104387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.104418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.104512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.104542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.104649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.104675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.104760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.104786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 
00:26:11.953 [2024-12-10 04:14:06.104900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.104930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.105025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.105050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.105143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.105168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.105257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.105280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.105361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.105387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.105470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.105495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.105591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.105618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.105730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.105755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-10 04:14:06.105843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-10 04:14:06.105869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.105956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.105981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 
00:26:11.954 [2024-12-10 04:14:06.106068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.106093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.106183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.106211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.106300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.106326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.106407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.106433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.106543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.106574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.106645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.106670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.106761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.106786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.106874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.106898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.106975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.106999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.107129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.107163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 
00:26:11.954 [2024-12-10 04:14:06.107252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.107276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.107364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.107390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.107474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.107499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.107590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.107617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.107708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.107734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.107822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.107848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.107933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.107957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.108037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.108062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.108144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.108167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.108281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.108306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 
00:26:11.954 [2024-12-10 04:14:06.108384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.108410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.108492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.108517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.108609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.108635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.108731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.108756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.108837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.108861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.108937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.108961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.109044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.109068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.109144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.109169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.109259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.109284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.109363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.109386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 
00:26:11.954 [2024-12-10 04:14:06.109474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.109506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.109597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.109623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.109697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.109722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.109807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.109838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.109926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.109951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.110038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.110062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.110151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.110176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-10 04:14:06.110258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-10 04:14:06.110284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.110384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.110418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.110509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.110535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 
00:26:11.955 [2024-12-10 04:14:06.110630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.110656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.110746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.110777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.110874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.110900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.110982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.111008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.111089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.111118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.111220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.111246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.111372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.111409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.111502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.111529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.111625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.111650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.111735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.111765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 
00:26:11.955 [2024-12-10 04:14:06.111854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.111879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.111960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.111984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.112060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.112084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.112164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.112188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.112266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.112291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.112377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.112402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.112515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.112543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.112629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.112654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.112738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.112762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.112844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.112869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 
00:26:11.955 [2024-12-10 04:14:06.112957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.112986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.113073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.113099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.113186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.113215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.113304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.113330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.113409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.113435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.113517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.113542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.113662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.113688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.113766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.113792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.113873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.113896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.113973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.114000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 
00:26:11.955 [2024-12-10 04:14:06.114201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.114230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.114319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.114347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.114432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.114459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.114564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.114591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.114673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.114700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.114781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.114806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-10 04:14:06.114897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-10 04:14:06.114922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.115015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.115045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.115151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.115178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.115263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.115288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 
00:26:11.956 [2024-12-10 04:14:06.115373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.115398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.115483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.115509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.115623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.115661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.115756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.115783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.115898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.115923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.116002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.116027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.116103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.116127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.116212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.116243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.116323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.116350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.116442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.116467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 
00:26:11.956 [2024-12-10 04:14:06.116567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.116621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.116710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.116739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.116829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.116855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.116939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.116965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.117041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.117067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.117264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.117293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.117385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.117412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.117490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.117515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.117613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.117638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.117717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.117742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 
00:26:11.956 [2024-12-10 04:14:06.117815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.117839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.117918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.117942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.118030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.118055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.118177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.118206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.118298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.118326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.118410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.118435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.118527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.118557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.118644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.118671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.118785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.118811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.118889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.118914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 
00:26:11.956 [2024-12-10 04:14:06.118997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.119025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-10 04:14:06.119109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-10 04:14:06.119136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.119233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.119261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.119342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.119367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.119454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.119479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.119568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.119594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.119678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.119709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.119796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.119822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.119910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.119936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.120026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.120054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 
00:26:11.957 [2024-12-10 04:14:06.120134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.957 [2024-12-10 04:14:06.120159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:11.957 qpair failed and we were unable to recover it.
00:26:11.957 [2024-12-10 04:14:06.120258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.957 [2024-12-10 04:14:06.120297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420
00:26:11.957 qpair failed and we were unable to recover it.
00:26:11.957 [2024-12-10 04:14:06.120387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.957 [2024-12-10 04:14:06.120414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:11.957 qpair failed and we were unable to recover it.
00:26:11.957 [2024-12-10 04:14:06.120500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.957 [2024-12-10 04:14:06.120525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:11.957 qpair failed and we were unable to recover it.
00:26:11.957 [2024-12-10 04:14:06.120621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.957 [2024-12-10 04:14:06.120647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:11.957 qpair failed and we were unable to recover it.
00:26:11.957 [2024-12-10 04:14:06.120732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.957 [2024-12-10 04:14:06.120760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:11.957 qpair failed and we were unable to recover it.
00:26:11.957 [2024-12-10 04:14:06.120841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.957 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:11.957 [2024-12-10 04:14:06.120866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:11.957 qpair failed and we were unable to recover it.
00:26:11.957 [2024-12-10 04:14:06.120978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.957 [2024-12-10 04:14:06.121003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:11.957 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:26:11.957 qpair failed and we were unable to recover it.
00:26:11.957 [2024-12-10 04:14:06.121092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.957 [2024-12-10 04:14:06.121121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420
00:26:11.957 qpair failed and we were unable to recover it.
00:26:11.957 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:11.957 [2024-12-10 04:14:06.121212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.121243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.121333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:11.957 [2024-12-10 04:14:06.121362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.121445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.121471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:11.957 [2024-12-10 04:14:06.121566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.121593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.121671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.121697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.121787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.121815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.121891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.121915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.122024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.122050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.122139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.122164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 
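The harness trace interleaved through the two entries above (common/autotest_common.sh@864 "(( i == 0 ))", @868 "return 0", and nvmf/common.sh@511 "timing_exit start_nvmf_tgt") marks the point where the script's wait-for-target check returns 0 and the start_nvmf_tgt timing region is closed, while the host side is still retrying qpair connects that get refused. The helper behind the @864/@868 lines is not shown in this excerpt; the snippet below is only a minimal sketch of that kind of bounded readiness loop, with made-up names, not the actual autotest_common.sh code.

# Sketch only (hypothetical helper name); illustrates a bounded wait-until-listening loop
# similar in spirit to the traced "(( i == 0 ))" / "return 0" check, not SPDK's real helper.
wait_for_tcp_listener() {
        local addr=$1 port=$2 i
        for ((i = 0; i < 30; i++)); do
                (( i == 0 )) || sleep 1   # probe immediately on the first pass, then back off
                # bash /dev/tcp probe: the redirect only succeeds once something accepts on addr:port
                if (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; then
                        return 0
                fi
        done
        return 1
}

Until a check like this succeeds, every initiator-side connect() is turned away, which is what the surrounding errno = 111 records show.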
00:26:11.957 [2024-12-10 04:14:06.122245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.122272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.122351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.122376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.122458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.122485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.122590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.122630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.122725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.122752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.122833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.122858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.122937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.122963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.957 qpair failed and we were unable to recover it. 00:26:11.957 [2024-12-10 04:14:06.123090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.957 [2024-12-10 04:14:06.123116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.123222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.123248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.123334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.123361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 
00:26:11.958 [2024-12-10 04:14:06.123455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.123485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.123584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.123611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.123694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.123721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.123810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.123836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.123916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.123941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.124036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.124064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.124144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.124170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.124271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.124299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.124385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.124411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.124500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.124527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 
00:26:11.958 [2024-12-10 04:14:06.124659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.124691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.124786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.124815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.124927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.124954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.125047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.125075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.125157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.125184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.125262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.125286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.125397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.125422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.125502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.125526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.125619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.125644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.125726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.125758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 
00:26:11.958 [2024-12-10 04:14:06.125891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.125919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.126016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.126043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.126126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.126152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.126234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.126260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.126345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.126377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.126497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.126523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.126628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.126658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.126746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.126773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.126887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.126913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.126999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.127025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 
00:26:11.958 [2024-12-10 04:14:06.127108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.127136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.127276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.127302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.127387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.127413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.958 [2024-12-10 04:14:06.127526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.958 [2024-12-10 04:14:06.127563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.958 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.127658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.127684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.127769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.127795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.127890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.127916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.127999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.128023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.128126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.128152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.128239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.128265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 
00:26:11.959 [2024-12-10 04:14:06.128346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.128371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.128472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.128512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.128613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.128641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.128728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.128754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.128835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.128859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.128938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.128964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.129044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.129068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.129189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.129216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.129294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.129318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.129406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.129429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 
00:26:11.959 [2024-12-10 04:14:06.129513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.129541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.129637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.129662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.129752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.129776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.129856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.129879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.129965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.129993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.130077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.130104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.130187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.130211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.130296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.130321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.130401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.130425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.130510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.130537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 
00:26:11.959 [2024-12-10 04:14:06.130645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.130680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.130773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.130802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.130885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.130911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.130999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.131025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.131108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.131133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.131237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.131263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.131349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.131390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.131479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.131507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.131600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.131628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.131704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.131730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 
00:26:11.959 [2024-12-10 04:14:06.131814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.131841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.131916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.131939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.959 [2024-12-10 04:14:06.132020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-12-10 04:14:06.132045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.959 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.132156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.132182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.132278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.132303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.132414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.132438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.132526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.132569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.132660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.132686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.132799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.132825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.132937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.132963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 
00:26:11.960 [2024-12-10 04:14:06.133072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.133098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.133205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.133233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.133325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.133352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.133433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.133458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.133534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.133566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.133658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.133684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.133772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.133796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.133883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.133912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.133993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.134018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.134098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.134123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 
00:26:11.960 [2024-12-10 04:14:06.134206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.134231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.134320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.134345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.134453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.134478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.134566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.134591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.134675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.134703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.134810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.134836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.134960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.134985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.135071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.135096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.135189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.135215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 00:26:11.960 [2024-12-10 04:14:06.135308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.960 [2024-12-10 04:14:06.135333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5bb0000b90 with addr=10.0.0.2, port=4420 00:26:11.960 qpair failed and we were unable to recover it. 
00:26:11.960 [2024-12-10 04:14:06.135452 - 04:14:06.145176] repeated reconnect failures, the same three messages for every attempt:
00:26:11.960 posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.960 nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 / 0x7f5ba4000b90 / 0x7f5ba8000b90 / 0x7f5bb0000b90 with addr=10.0.0.2, port=4420
00:26:11.960 qpair failed and we were unable to recover it.
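errno = 111 in the messages above is ECONNREFUSED on Linux: the initiator's connect() to 10.0.0.2:4420 is being actively refused because nothing is accepting NVMe/TCP connections there at this point in the disconnect test, so the driver keeps retrying and reporting each qpair as unrecoverable. A minimal bash sketch, not part of the test output (host/port reused from the log, timeout value arbitrary), that decodes the errno and reproduces the condition:

# errno 111 decodes to ECONNREFUSED ("Connection refused") on Linux
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# Probe the address/port the initiator keeps retrying (values taken from the log)
if ! timeout 1 bash -c ': </dev/tcp/10.0.0.2/4420' 2>/dev/null; then
  echo "10.0.0.2:4420 is not accepting connections - nvme_tcp_qpair_connect_sock will keep failing"
fi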
00:26:11.962 [2024-12-10 04:14:06.145267 - 04:14:06.147380] the same connect() failed / sock connection error / qpair failed block keeps repeating around the following test trace lines:
00:26:11.962 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:11.962 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:11.963 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:11.963 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
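The rpc_cmd bdev_malloc_create 64 512 -b Malloc0 step traced above creates the RAM-backed test bdev that the target side will export. In the SPDK test framework rpc_cmd forwards to scripts/rpc.py, so an equivalent stand-alone invocation would look roughly like the sketch below (repository path and RPC socket path are assumptions; /var/tmp/spdk.sock is rpc.py's default):

# Create a 64 MB malloc bdev with a 512-byte block size, named Malloc0
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
# Confirm the bdev exists before the subsystem/namespace setup that follows
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Malloc0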
00:26:11.963 [2024-12-10 04:14:06.147457 - 04:14:06.161436] the repeated reconnect failures continue, the same three messages for every attempt:
00:26:11.963 posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.963 nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 / 0x7f5ba4000b90 / 0x7f5ba8000b90 / 0x7f5bb0000b90 with addr=10.0.0.2, port=4420
00:26:11.963 qpair failed and we were unable to recover it.
00:26:11.965 [2024-12-10 04:14:06.161522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.965 [2024-12-10 04:14:06.161556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.965 qpair failed and we were unable to recover it. 00:26:11.965 [2024-12-10 04:14:06.161649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.965 [2024-12-10 04:14:06.161676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.965 qpair failed and we were unable to recover it. 00:26:11.965 [2024-12-10 04:14:06.161783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.965 [2024-12-10 04:14:06.161808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.965 qpair failed and we were unable to recover it. 00:26:11.965 [2024-12-10 04:14:06.161888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.965 [2024-12-10 04:14:06.161915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.965 qpair failed and we were unable to recover it. 00:26:11.965 [2024-12-10 04:14:06.162019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.965 [2024-12-10 04:14:06.162045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.162131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.162163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.162244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.162271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.162348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.162373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.162448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.162473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.162562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.162589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 
00:26:11.966 [2024-12-10 04:14:06.162678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.162706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.162840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.162866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.162948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.162975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.163060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.163087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.163205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.163233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.163319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.163344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.163426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.163452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.163535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.163567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.163661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.163687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.163806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.163832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 
00:26:11.966 [2024-12-10 04:14:06.163910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.163937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.164011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.164038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.164119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.164144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.164228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.164253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.164334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.164359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.164469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.164494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.164586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.164617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.164707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.164732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.164812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.164838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.164951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.164977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 
00:26:11.966 [2024-12-10 04:14:06.165051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.165077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.165149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.165174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.165265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.165296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.966 qpair failed and we were unable to recover it. 00:26:11.966 [2024-12-10 04:14:06.165377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.966 [2024-12-10 04:14:06.165402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.165482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.165508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.165614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.165643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.165724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.165749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.165836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.165862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.165941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.165967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.166041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.166066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 
00:26:11.967 [2024-12-10 04:14:06.166170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.166196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.166305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.166330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.166416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.166441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.166524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.166555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.166648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.166674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.166756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.166783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.166879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.166905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.166992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.167017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.167101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.167127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.167203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.167228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 
00:26:11.967 [2024-12-10 04:14:06.167322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.167350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.167442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.167481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.167581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.167615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.167711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.167742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.167875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.167902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.168095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.168123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.168211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.168237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.168319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.168346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.168457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.168487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.168574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.168602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 
00:26:11.967 [2024-12-10 04:14:06.168692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.168718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.168835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.168862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.168950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.168975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.169053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.169079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.169165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.169192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.169274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.169301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.169386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.169412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.169519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.169550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.169648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.169675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.169758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.169784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 
00:26:11.967 [2024-12-10 04:14:06.169872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.169897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.169996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.170027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.170124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.170157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.170238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.170265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.170346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.170371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.170451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.170477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.170562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.170588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.170686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.170713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.967 [2024-12-10 04:14:06.170804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.967 [2024-12-10 04:14:06.170830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.967 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.170905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.170932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 
00:26:11.968 [2024-12-10 04:14:06.171011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.171036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.171152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.171181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.171264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.171291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.171407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.171440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.171536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.171571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.171655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.171682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.171769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.171795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.171880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.171908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.171991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.172017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.172118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.172145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 
00:26:11.968 [2024-12-10 04:14:06.172231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.172259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.172351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.172376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.172479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.172505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.172588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.172614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.172727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.172753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.172845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.172871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.172955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.172981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.173057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.173082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.173163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.173188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.173278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.173304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 
00:26:11.968 [2024-12-10 04:14:06.173414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.173439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.173518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.173552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.173637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.173663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.173736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.173763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.173851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.173876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.173949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.173974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.174087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.174113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.174198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.174224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.174309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.174338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.174419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.174445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 
00:26:11.968 [2024-12-10 04:14:06.174538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.174582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.174669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.174695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.174771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.174797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.174915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.174942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.175024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.175050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.175137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.175162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.175269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.175301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.175393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.175419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.175500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.175525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.175621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.175646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 
00:26:11.968 [2024-12-10 04:14:06.175735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.175760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.175861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.175886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.175973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.175998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.176075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.968 [2024-12-10 04:14:06.176099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.968 qpair failed and we were unable to recover it. 00:26:11.968 [2024-12-10 04:14:06.176176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.176200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.176280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.176305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.176508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.176536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.176639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.176665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.176789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.176820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.176934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.176961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 
00:26:11.969 [2024-12-10 04:14:06.177047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.177073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.177157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.177189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.177273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.177298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.177382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.177407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.177495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.177520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.177612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.177637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.177718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.177743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.177822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.177847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.177933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.177960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.178055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.178080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 
00:26:11.969 [2024-12-10 04:14:06.178165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.178196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.178286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.178311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.178401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.178426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.178513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.178537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.178651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.178676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.178763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.178788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.178874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.178898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.179008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.179033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.179108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.179133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.179244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.179282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 
00:26:11.969 [2024-12-10 04:14:06.179378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.179407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.179535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.179571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.179666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.179690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.179787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.179811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.179918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.179943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.180038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.180064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.180149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.180173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.180258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.180282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.180361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.180387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.180486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.180525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 
00:26:11.969 [2024-12-10 04:14:06.180634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.180660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.180744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.180770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.180844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.180869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.180947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.969 [2024-12-10 04:14:06.180971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.969 qpair failed and we were unable to recover it. 00:26:11.969 [2024-12-10 04:14:06.181111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.181137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.181223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.181248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.181332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.181361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.181476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.181501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.181597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.181623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.181712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.181737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 
00:26:11.970 [2024-12-10 04:14:06.181826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.181850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.181925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.181949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.182055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.182079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.182170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.182194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.182272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.182296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.182407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.182432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.182517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.182541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.182639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.182663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.182743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.182768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.182841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.182866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 
00:26:11.970 [2024-12-10 04:14:06.182984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.183009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.183096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.183121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.183230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.183256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.183343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.183368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.183449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.183473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.183568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.183593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.183676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.183701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.183776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.183801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.183888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.183913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.184005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.184030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 
00:26:11.970 [2024-12-10 04:14:06.184107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.184131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.184214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.184242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.184319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.184343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.184464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.184501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.184603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.184628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.184739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.184764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.184838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.184863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.184952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.184978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.185070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.185094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.185169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.185193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 
00:26:11.970 [2024-12-10 04:14:06.185281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.185306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.185398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.185422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.185531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.185560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.185642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.185666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.185757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.185783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.185870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.185895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.185973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.185998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.186094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.186119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.186202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.186226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 00:26:11.970 [2024-12-10 04:14:06.186314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.970 [2024-12-10 04:14:06.186338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.970 qpair failed and we were unable to recover it. 
00:26:11.971 [2024-12-10 04:14:06.186416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.186444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.186521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.186551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.186641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.186666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.186747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.186772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.186881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.186906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.187000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.187025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.187101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.187126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.187205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.187229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.187325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.187353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.187450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.187475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 
00:26:11.971 [2024-12-10 04:14:06.187594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.187626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.187725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.187750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.187848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.187886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.187970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.187997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.188085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.188110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.188217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.188242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.188326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.188351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.188444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.188469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.188584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.188610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.188719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.188744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 
00:26:11.971 [2024-12-10 04:14:06.188822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.188846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.188924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.188948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.189028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.189052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.189165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.189189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.189275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.189301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.189377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.189402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.189516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.189557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.189647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.189672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.189756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.189780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 00:26:11.971 [2024-12-10 04:14:06.189869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.971 [2024-12-10 04:14:06.189895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.971 qpair failed and we were unable to recover it. 
00:26:11.971 [2024-12-10 04:14:06.190008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.971 [2024-12-10 04:14:06.190034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:11.971 qpair failed and we were unable to recover it.
00:26:11.971 [2024-12-10 04:14:06.190108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.971 [2024-12-10 04:14:06.190133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:11.971 qpair failed and we were unable to recover it.
00:26:11.971 [2024-12-10 04:14:06.190211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.971 [2024-12-10 04:14:06.190236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:11.971 qpair failed and we were unable to recover it.
00:26:11.971 [2024-12-10 04:14:06.190318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.971 [2024-12-10 04:14:06.190343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:11.971 qpair failed and we were unable to recover it.
00:26:11.971 [2024-12-10 04:14:06.190420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.971 [2024-12-10 04:14:06.190444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:11.971 qpair failed and we were unable to recover it.
00:26:11.971 [2024-12-10 04:14:06.190518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.971 [2024-12-10 04:14:06.190563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:11.971 qpair failed and we were unable to recover it.
00:26:11.971 [2024-12-10 04:14:06.190676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.971 [2024-12-10 04:14:06.190701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:11.971 qpair failed and we were unable to recover it.
00:26:11.971 [2024-12-10 04:14:06.190817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.971 [2024-12-10 04:14:06.190852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:11.971 qpair failed and we were unable to recover it.
00:26:11.971 Malloc0
00:26:11.971 [2024-12-10 04:14:06.190970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.971 [2024-12-10 04:14:06.190995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:11.971 qpair failed and we were unable to recover it.
00:26:11.971 [2024-12-10 04:14:06.191085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.971 [2024-12-10 04:14:06.191109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:11.971 qpair failed and we were unable to recover it.
00:26:11.971 [2024-12-10 04:14:06.191190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.971 [2024-12-10 04:14:06.191215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:11.971 qpair failed and we were unable to recover it.
00:26:11.971 [2024-12-10 04:14:06.191304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.971 [2024-12-10 04:14:06.191329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:11.971 qpair failed and we were unable to recover it.
00:26:11.971 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:11.971 [2024-12-10 04:14:06.191405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.971 [2024-12-10 04:14:06.191429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:11.971 qpair failed and we were unable to recover it.
00:26:11.971 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:11.971 [2024-12-10 04:14:06.191510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.971 [2024-12-10 04:14:06.191535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:11.971 qpair failed and we were unable to recover it.
00:26:11.971 [2024-12-10 04:14:06.191632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.972 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable [2024-12-10 04:14:06.191657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:11.972 qpair failed and we were unable to recover it.
00:26:11.972 [2024-12-10 04:14:06.191741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-12-10 04:14:06.191765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:11.972 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:11.972 qpair failed and we were unable to recover it.
00:26:11.972 [2024-12-10 04:14:06.191866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.972 [2024-12-10 04:14:06.191890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420
00:26:11.972 qpair failed and we were unable to recover it.
00:26:11.972 [2024-12-10 04:14:06.191968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.972 [2024-12-10 04:14:06.191995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:11.972 qpair failed and we were unable to recover it.
00:26:11.972 [2024-12-10 04:14:06.192094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.192118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.192210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.192235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.192314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.192339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.192435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.192459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.192535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.192571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.192659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.192685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.192775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.192800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.192886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.192911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.193005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.193038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.193110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.193135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 
00:26:11.972 [2024-12-10 04:14:06.193212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.193236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.193325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.193350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.193430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.193456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.193530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.193561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.193647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.193678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.193758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.193782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.193867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.193893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.193982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.194010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.194099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.194126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.194205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.194230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 
00:26:11.972 [2024-12-10 04:14:06.194323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.972 [2024-12-10 04:14:06.194348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:11.972 qpair failed and we were unable to recover it.
00:26:11.972 [2024-12-10 04:14:06.194429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.972 [2024-12-10 04:14:06.194453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:11.972 qpair failed and we were unable to recover it.
00:26:11.972 [2024-12-10 04:14:06.194532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.972 [2024-12-10 04:14:06.194567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:11.972 qpair failed and we were unable to recover it.
00:26:11.972 [2024-12-10 04:14:06.194653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.972 [2024-12-10 04:14:06.194678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 [2024-12-10 04:14:06.194663] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:11.972 qpair failed and we were unable to recover it.
00:26:11.972 [2024-12-10 04:14:06.194794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.972 [2024-12-10 04:14:06.194819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:11.972 qpair failed and we were unable to recover it.
00:26:11.972 [2024-12-10 04:14:06.194907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.972 [2024-12-10 04:14:06.194932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:11.972 qpair failed and we were unable to recover it.
00:26:11.972 [2024-12-10 04:14:06.195017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.972 [2024-12-10 04:14:06.195042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420
00:26:11.972 qpair failed and we were unable to recover it.
00:26:11.972 [2024-12-10 04:14:06.195130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.972 [2024-12-10 04:14:06.195160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420
00:26:11.972 qpair failed and we were unable to recover it.
00:26:11.972 [2024-12-10 04:14:06.195254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.972 [2024-12-10 04:14:06.195279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420
00:26:11.972 qpair failed and we were unable to recover it.
00:26:11.972 [2024-12-10 04:14:06.195395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.972 [2024-12-10 04:14:06.195422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420
00:26:11.972 qpair failed and we were unable to recover it.
00:26:11.972 [2024-12-10 04:14:06.195503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.195533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.195642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.195670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.195760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.195786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.195876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.195900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.195980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.196006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.196099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.196124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.196207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.196233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.196319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.196344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.196428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.196453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 00:26:11.972 [2024-12-10 04:14:06.196534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.196565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.972 qpair failed and we were unable to recover it. 
00:26:11.972 [2024-12-10 04:14:06.196649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.972 [2024-12-10 04:14:06.196674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.196768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.196797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.196889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.196918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.197010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.197036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.197141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.197168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.197258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.197283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.197370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.197395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.197478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.197504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.197593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.197623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.197714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.197741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 
00:26:11.973 [2024-12-10 04:14:06.197821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.197846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.197932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.197958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.198043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.198068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.198159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.198184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.198272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.198302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.198414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.198439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.198521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.198551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.198640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.198664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.198752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.198778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.198856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.198880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 
00:26:11.973 [2024-12-10 04:14:06.198966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.198993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.199075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.199102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.199190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.199217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.199298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.199324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.199403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.199428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.199537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.199567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.199652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.199676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.199752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.199776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.199860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.199885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.199977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.200003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 
00:26:11.973 [2024-12-10 04:14:06.200081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.200105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.200211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.200237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.200321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.200346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.200429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.200454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.200575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.200602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.200692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.200716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.973 [2024-12-10 04:14:06.200797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.973 [2024-12-10 04:14:06.200821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.973 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.200902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.200926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.201004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.201028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.201141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.201166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 
00:26:11.974 [2024-12-10 04:14:06.201257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.201281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.201395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.201425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.201510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.201536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.201618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.201642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.201720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.201746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.201818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.201843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.201921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.201946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.202059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.202085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.202161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.202185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.202270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.202295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 
00:26:11.974 [2024-12-10 04:14:06.202369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.202393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.202468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.202492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.202581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.202607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.202684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.202708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.202790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.202819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.974 [2024-12-10 04:14:06.202933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.202958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.203054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:11.974 [2024-12-10 04:14:06.203079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.203150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.203175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 
00:26:11.974 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.974 [2024-12-10 04:14:06.203269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.203298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.203379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.203406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.974 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.203498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.203526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.203621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.203647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.203760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.203785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.203858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.203883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.203971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.203995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.204081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.204106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.204185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.204209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 
00:26:11.974 [2024-12-10 04:14:06.204301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.204326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.204398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.204422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.204498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.204523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.204610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.204635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.204743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.204782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.204867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.204895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.204976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.205002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.205078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.205104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.205214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.205240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.205329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.205356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 
00:26:11.974 [2024-12-10 04:14:06.205440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.205465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.205555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.205581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.205666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.974 [2024-12-10 04:14:06.205693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.974 qpair failed and we were unable to recover it. 00:26:11.974 [2024-12-10 04:14:06.205814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.205840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.205916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.205940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.206046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.206071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.206156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.206181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.206270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.206295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.206376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.206401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.206486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.206512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 
00:26:11.975 [2024-12-10 04:14:06.206600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.206625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.206706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.206730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.206810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.206834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.206923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.206948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.207063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.207089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.207200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.207225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.207309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.207338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.207429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.207453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.207535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.207576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.207665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.207689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 
00:26:11.975 [2024-12-10 04:14:06.207783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.207809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.207887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.207912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.207997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.208024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.208101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.208125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.208208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.208235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.208323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.208349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.208441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.208468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.208556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.208583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.208668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.208693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.208775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.208802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 
00:26:11.975 [2024-12-10 04:14:06.208923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.208949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.209033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.209059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.209164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.209190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.209274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.209299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.209385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.209410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.209495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.209519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.209618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.209645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.209725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.209749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.209828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.209854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.209936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.209962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 
00:26:11.975 [2024-12-10 04:14:06.210052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.210078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.210166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.210192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.210275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.210300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.210379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.210404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.210485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.210510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.210594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.210619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.210699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.210724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.210806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.210832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.975 [2024-12-10 04:14:06.210918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.975 [2024-12-10 04:14:06.210943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.975 qpair failed and we were unable to recover it. 00:26:11.976 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.976 [2024-12-10 04:14:06.211022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.211047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 
00:26:11.976 [2024-12-10 04:14:06.211158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.211184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.976 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.211276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.211304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.976 [2024-12-10 04:14:06.211392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.211417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.211496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.211523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:11.976 [2024-12-10 04:14:06.211630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.211657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.211750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.211776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.211855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.211881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.211971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.211997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 
00:26:11.976 [2024-12-10 04:14:06.212110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.212136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.212225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.212251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.212323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.212347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.212440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.212465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.212558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.212583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.212671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.212697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.212776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.212802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.212883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.212908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.212991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.213017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.213106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.213131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 
00:26:11.976 [2024-12-10 04:14:06.213223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.213253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.213361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.213389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.213474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.213500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.213584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.213609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.213688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.213714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.213795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.213820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.213897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.213923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.214005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.214031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.214107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.214132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.214219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.214247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 
00:26:11.976 [2024-12-10 04:14:06.214339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.214364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.214471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.214498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.214575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.214600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.214687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.214711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.214806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.214831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.214941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.214967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.215055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.215080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.215174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.215199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.215281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.215308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.215392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.215418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 
00:26:11.976 [2024-12-10 04:14:06.215503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.215530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.215625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.215651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.215727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.215752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.215843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.215867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.976 [2024-12-10 04:14:06.215949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.976 [2024-12-10 04:14:06.215975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.976 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.216085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.216110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.216216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.216241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.216328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.216356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.216440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.216467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.216570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.216597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 
00:26:11.977 [2024-12-10 04:14:06.216687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.216712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.216789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.216815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.216895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.216919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.217017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.217042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.217135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.217178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.217284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.217311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.217402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.217429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.217515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.217541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.217624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.217648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.217736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.217762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 
00:26:11.977 [2024-12-10 04:14:06.217883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.217915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.218046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.218072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.218170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.218196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.218283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.218308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.218405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.218432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.218517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.218549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.218659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.218685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.218767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.218793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.218874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.218899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 
00:26:11.977 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.977 [2024-12-10 04:14:06.218982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.219010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:11.977 [2024-12-10 04:14:06.219097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.219126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.219198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.219222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.219307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:11.977 [2024-12-10 04:14:06.219339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.219433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.219458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.219551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.219581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.219685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.219712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.219792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.219818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 
00:26:11.977 [2024-12-10 04:14:06.219906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.219932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.220017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.220043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.220121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.977 [2024-12-10 04:14:06.220147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.977 qpair failed and we were unable to recover it. 00:26:11.977 [2024-12-10 04:14:06.220229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.220258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.220344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.220370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.220453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.220478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.220560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.220613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.220694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.220718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.220799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.220830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.220944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.220969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 
00:26:11.978 [2024-12-10 04:14:06.221050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.221075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.221163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.221190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.221277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.221304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.221401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.221435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.221526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.221559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.221675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.221703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.221785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.221813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.221936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.221963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.222075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.222102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba4000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.222180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.222209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 
00:26:11.978 [2024-12-10 04:14:06.222344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.222370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.222477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.222503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.222606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.222631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.222716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-12-10 04:14:06.222741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559fa0 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.223233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:11.978 [2024-12-10 04:14:06.225495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.978 [2024-12-10 04:14:06.225637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.978 [2024-12-10 04:14:06.225668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.978 [2024-12-10 04:14:06.225683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.978 [2024-12-10 04:14:06.225695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:11.978 [2024-12-10 04:14:06.225729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.978 qpair failed and we were unable to recover it. 
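Everything up to this point in the trace is the same TCP-level failure repeated: connect() returns errno = 111, which on Linux is ECONNREFUSED, because the initiator keeps retrying 10.0.0.2:4420 before any listener exists there. At 04:14:06.223 the target logs "NVMe/TCP Target Listening on 10.0.0.2 port 4420", and from that point the failures change character: the TCP connection itself succeeds, but the NVMe-oF Fabrics CONNECT for the I/O queue pair is rejected (the target reports "Unknown controller ID 0x1", the host sees the CONNECT complete with sct 1, sc 130 and then a CQ transport error -6 on qpair id 3), so the qpair still cannot be established. When triaging a console log like this one, the two failure classes can be separated by counting the literal error strings; the commands below are only an illustrative sketch and assume the console output has been saved to a file (the name console.log is hypothetical, not part of this run).

  LOG=console.log   # hypothetical saved copy of this console output
  # TCP-level refusals seen before the listener came up
  grep -c 'connect() failed, errno = 111' "$LOG"
  # fabric-level CONNECT rejections seen after the listener came up
  grep -c 'Connect command completed with error: sct 1, sc 130' "$LOG"
  # matching target-side reason for those rejections
  grep -c 'Unknown controller ID 0x1' "$LOG"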
00:26:11.978 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.978 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:11.978 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.978 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:11.978 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.978 04:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2506052 00:26:11.978 [2024-12-10 04:14:06.235352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.978 [2024-12-10 04:14:06.235458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.978 [2024-12-10 04:14:06.235484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.978 [2024-12-10 04:14:06.235498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.978 [2024-12-10 04:14:06.235510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:11.978 [2024-12-10 04:14:06.235538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.245303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.978 [2024-12-10 04:14:06.245416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.978 [2024-12-10 04:14:06.245442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.978 [2024-12-10 04:14:06.245456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.978 [2024-12-10 04:14:06.245473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:11.978 [2024-12-10 04:14:06.245502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.978 qpair failed and we were unable to recover it. 
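The rpc_cmd nvmf_subsystem_add_listener lines in the trace are the test script configuring the running SPDK target over its JSON-RPC socket: one listener for the subsystem nqn.2016-06.io.spdk:cnode1 and one for the discovery service, both on 10.0.0.2 port 4420 (rpc_cmd is the autotest helper that forwards its arguments to the target's JSON-RPC interface). Outside the test harness the same setup is normally done with SPDK's scripts/rpc.py, after which a Linux host can discover and connect with nvme-cli; the commands below are a hedged illustration of that flow under those assumptions, not output from this run.

  # target side: add TCP listeners (equivalent to the rpc_cmd calls above)
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # host side: discover the subsystem and connect to it with the kernel initiator
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1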
00:26:11.978 [2024-12-10 04:14:06.255346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.978 [2024-12-10 04:14:06.255442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.978 [2024-12-10 04:14:06.255468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.978 [2024-12-10 04:14:06.255482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.978 [2024-12-10 04:14:06.255494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:11.978 [2024-12-10 04:14:06.255521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-12-10 04:14:06.265279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.978 [2024-12-10 04:14:06.265367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.978 [2024-12-10 04:14:06.265393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.978 [2024-12-10 04:14:06.265407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.978 [2024-12-10 04:14:06.265418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:11.978 [2024-12-10 04:14:06.265446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.978 qpair failed and we were unable to recover it. 00:26:12.240 [2024-12-10 04:14:06.275356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.240 [2024-12-10 04:14:06.275474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.240 [2024-12-10 04:14:06.275500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.240 [2024-12-10 04:14:06.275523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.240 [2024-12-10 04:14:06.275535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.240 [2024-12-10 04:14:06.275573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.240 qpair failed and we were unable to recover it. 
00:26:12.240 [2024-12-10 04:14:06.285362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.240 [2024-12-10 04:14:06.285459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.240 [2024-12-10 04:14:06.285486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.240 [2024-12-10 04:14:06.285500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.240 [2024-12-10 04:14:06.285520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.240 [2024-12-10 04:14:06.285560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.240 qpair failed and we were unable to recover it. 00:26:12.240 [2024-12-10 04:14:06.295389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.240 [2024-12-10 04:14:06.295515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.240 [2024-12-10 04:14:06.295542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.240 [2024-12-10 04:14:06.295567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.240 [2024-12-10 04:14:06.295579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.241 [2024-12-10 04:14:06.295609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.241 qpair failed and we were unable to recover it. 00:26:12.241 [2024-12-10 04:14:06.305434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.241 [2024-12-10 04:14:06.305521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.241 [2024-12-10 04:14:06.305557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.241 [2024-12-10 04:14:06.305574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.241 [2024-12-10 04:14:06.305586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.241 [2024-12-10 04:14:06.305614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.241 qpair failed and we were unable to recover it. 
00:26:12.241 [2024-12-10 04:14:06.315506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.241 [2024-12-10 04:14:06.315609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.241 [2024-12-10 04:14:06.315634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.241 [2024-12-10 04:14:06.315648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.241 [2024-12-10 04:14:06.315660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.241 [2024-12-10 04:14:06.315688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.241 qpair failed and we were unable to recover it. 00:26:12.241 [2024-12-10 04:14:06.325494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.241 [2024-12-10 04:14:06.325591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.241 [2024-12-10 04:14:06.325616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.241 [2024-12-10 04:14:06.325630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.241 [2024-12-10 04:14:06.325642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.241 [2024-12-10 04:14:06.325671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.241 qpair failed and we were unable to recover it. 00:26:12.241 [2024-12-10 04:14:06.335513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.241 [2024-12-10 04:14:06.335630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.241 [2024-12-10 04:14:06.335663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.241 [2024-12-10 04:14:06.335678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.241 [2024-12-10 04:14:06.335690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.241 [2024-12-10 04:14:06.335718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.241 qpair failed and we were unable to recover it. 
00:26:12.241 [2024-12-10 04:14:06.345517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.241 [2024-12-10 04:14:06.345615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.241 [2024-12-10 04:14:06.345643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.241 [2024-12-10 04:14:06.345660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.241 [2024-12-10 04:14:06.345672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.241 [2024-12-10 04:14:06.345701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.241 qpair failed and we were unable to recover it. 00:26:12.241 [2024-12-10 04:14:06.355537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.241 [2024-12-10 04:14:06.355632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.241 [2024-12-10 04:14:06.355658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.241 [2024-12-10 04:14:06.355671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.241 [2024-12-10 04:14:06.355684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.241 [2024-12-10 04:14:06.355712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.241 qpair failed and we were unable to recover it. 00:26:12.241 [2024-12-10 04:14:06.365565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.241 [2024-12-10 04:14:06.365662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.241 [2024-12-10 04:14:06.365688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.241 [2024-12-10 04:14:06.365702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.241 [2024-12-10 04:14:06.365714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.241 [2024-12-10 04:14:06.365743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.241 qpair failed and we were unable to recover it. 
00:26:12.241 [2024-12-10 04:14:06.375607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.241 [2024-12-10 04:14:06.375714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.241 [2024-12-10 04:14:06.375739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.241 [2024-12-10 04:14:06.375753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.241 [2024-12-10 04:14:06.375774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.241 [2024-12-10 04:14:06.375803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.241 qpair failed and we were unable to recover it. 00:26:12.241 [2024-12-10 04:14:06.385615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.241 [2024-12-10 04:14:06.385698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.241 [2024-12-10 04:14:06.385724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.241 [2024-12-10 04:14:06.385738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.241 [2024-12-10 04:14:06.385749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.241 [2024-12-10 04:14:06.385777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.241 qpair failed and we were unable to recover it. 00:26:12.241 [2024-12-10 04:14:06.395660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.241 [2024-12-10 04:14:06.395743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.241 [2024-12-10 04:14:06.395769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.241 [2024-12-10 04:14:06.395783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.241 [2024-12-10 04:14:06.395794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.241 [2024-12-10 04:14:06.395822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.241 qpair failed and we were unable to recover it. 
00:26:12.241 [2024-12-10 04:14:06.405709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.241 [2024-12-10 04:14:06.405798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.241 [2024-12-10 04:14:06.405823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.241 [2024-12-10 04:14:06.405837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.241 [2024-12-10 04:14:06.405849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.241 [2024-12-10 04:14:06.405877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.241 qpair failed and we were unable to recover it. 00:26:12.241 [2024-12-10 04:14:06.415714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.241 [2024-12-10 04:14:06.415809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.241 [2024-12-10 04:14:06.415834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.241 [2024-12-10 04:14:06.415848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.241 [2024-12-10 04:14:06.415860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.241 [2024-12-10 04:14:06.415889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.241 qpair failed and we were unable to recover it. 00:26:12.241 [2024-12-10 04:14:06.425727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.241 [2024-12-10 04:14:06.425810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.241 [2024-12-10 04:14:06.425835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.241 [2024-12-10 04:14:06.425849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.241 [2024-12-10 04:14:06.425860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.242 [2024-12-10 04:14:06.425888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.242 qpair failed and we were unable to recover it. 
00:26:12.242 [2024-12-10 04:14:06.435769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.242 [2024-12-10 04:14:06.435859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.242 [2024-12-10 04:14:06.435884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.242 [2024-12-10 04:14:06.435898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.242 [2024-12-10 04:14:06.435910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.242 [2024-12-10 04:14:06.435937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.242 qpair failed and we were unable to recover it. 00:26:12.242 [2024-12-10 04:14:06.445814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.242 [2024-12-10 04:14:06.445945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.242 [2024-12-10 04:14:06.445970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.242 [2024-12-10 04:14:06.445984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.242 [2024-12-10 04:14:06.445995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.242 [2024-12-10 04:14:06.446023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.242 qpair failed and we were unable to recover it. 00:26:12.242 [2024-12-10 04:14:06.455827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.242 [2024-12-10 04:14:06.455924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.242 [2024-12-10 04:14:06.455949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.242 [2024-12-10 04:14:06.455963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.242 [2024-12-10 04:14:06.455974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.242 [2024-12-10 04:14:06.456002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.242 qpair failed and we were unable to recover it. 
00:26:12.242 [2024-12-10 04:14:06.465864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.242 [2024-12-10 04:14:06.465946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.242 [2024-12-10 04:14:06.465977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.242 [2024-12-10 04:14:06.465992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.242 [2024-12-10 04:14:06.466004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.242 [2024-12-10 04:14:06.466032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.242 qpair failed and we were unable to recover it. 00:26:12.242 [2024-12-10 04:14:06.475890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.242 [2024-12-10 04:14:06.475983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.242 [2024-12-10 04:14:06.476008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.242 [2024-12-10 04:14:06.476023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.242 [2024-12-10 04:14:06.476034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.242 [2024-12-10 04:14:06.476062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.242 qpair failed and we were unable to recover it. 00:26:12.242 [2024-12-10 04:14:06.485891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.242 [2024-12-10 04:14:06.485990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.242 [2024-12-10 04:14:06.486015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.242 [2024-12-10 04:14:06.486029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.242 [2024-12-10 04:14:06.486040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.242 [2024-12-10 04:14:06.486068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.242 qpair failed and we were unable to recover it. 
00:26:12.242 [2024-12-10 04:14:06.495930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.242 [2024-12-10 04:14:06.496029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.242 [2024-12-10 04:14:06.496054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.242 [2024-12-10 04:14:06.496068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.242 [2024-12-10 04:14:06.496079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.242 [2024-12-10 04:14:06.496107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.242 qpair failed and we were unable to recover it. 00:26:12.242 [2024-12-10 04:14:06.505977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.242 [2024-12-10 04:14:06.506064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.242 [2024-12-10 04:14:06.506092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.242 [2024-12-10 04:14:06.506109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.242 [2024-12-10 04:14:06.506126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.242 [2024-12-10 04:14:06.506156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.242 qpair failed and we were unable to recover it. 00:26:12.242 [2024-12-10 04:14:06.515977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.242 [2024-12-10 04:14:06.516068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.242 [2024-12-10 04:14:06.516093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.242 [2024-12-10 04:14:06.516107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.242 [2024-12-10 04:14:06.516119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.242 [2024-12-10 04:14:06.516147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.242 qpair failed and we were unable to recover it. 
00:26:12.242 [2024-12-10 04:14:06.526034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.242 [2024-12-10 04:14:06.526146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.242 [2024-12-10 04:14:06.526172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.242 [2024-12-10 04:14:06.526185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.242 [2024-12-10 04:14:06.526197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.242 [2024-12-10 04:14:06.526225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.242 qpair failed and we were unable to recover it. 00:26:12.242 [2024-12-10 04:14:06.536052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.242 [2024-12-10 04:14:06.536169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.242 [2024-12-10 04:14:06.536194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.242 [2024-12-10 04:14:06.536208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.242 [2024-12-10 04:14:06.536219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.242 [2024-12-10 04:14:06.536247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.242 qpair failed and we were unable to recover it. 00:26:12.242 [2024-12-10 04:14:06.546090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.242 [2024-12-10 04:14:06.546180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.242 [2024-12-10 04:14:06.546205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.242 [2024-12-10 04:14:06.546219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.242 [2024-12-10 04:14:06.546230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.242 [2024-12-10 04:14:06.546258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.242 qpair failed and we were unable to recover it. 
00:26:12.242 [2024-12-10 04:14:06.556129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.242 [2024-12-10 04:14:06.556218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.243 [2024-12-10 04:14:06.556243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.243 [2024-12-10 04:14:06.556256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.243 [2024-12-10 04:14:06.556268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.243 [2024-12-10 04:14:06.556296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.243 qpair failed and we were unable to recover it. 00:26:12.243 [2024-12-10 04:14:06.566121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.243 [2024-12-10 04:14:06.566206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.243 [2024-12-10 04:14:06.566232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.243 [2024-12-10 04:14:06.566253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.243 [2024-12-10 04:14:06.566267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.243 [2024-12-10 04:14:06.566296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.243 qpair failed and we were unable to recover it. 00:26:12.243 [2024-12-10 04:14:06.576200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.243 [2024-12-10 04:14:06.576297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.243 [2024-12-10 04:14:06.576324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.243 [2024-12-10 04:14:06.576338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.243 [2024-12-10 04:14:06.576349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.243 [2024-12-10 04:14:06.576378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.243 qpair failed and we were unable to recover it. 
00:26:12.243 [2024-12-10 04:14:06.586240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.243 [2024-12-10 04:14:06.586332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.243 [2024-12-10 04:14:06.586361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.243 [2024-12-10 04:14:06.586377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.243 [2024-12-10 04:14:06.586389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.243 [2024-12-10 04:14:06.586418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.243 qpair failed and we were unable to recover it. 00:26:12.243 [2024-12-10 04:14:06.596200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.243 [2024-12-10 04:14:06.596291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.243 [2024-12-10 04:14:06.596323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.243 [2024-12-10 04:14:06.596338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.243 [2024-12-10 04:14:06.596349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.243 [2024-12-10 04:14:06.596378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.243 qpair failed and we were unable to recover it. 00:26:12.243 [2024-12-10 04:14:06.606229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.243 [2024-12-10 04:14:06.606315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.243 [2024-12-10 04:14:06.606340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.243 [2024-12-10 04:14:06.606354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.243 [2024-12-10 04:14:06.606366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.243 [2024-12-10 04:14:06.606394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.243 qpair failed and we were unable to recover it. 
00:26:12.243 [2024-12-10 04:14:06.616322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.243 [2024-12-10 04:14:06.616422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.243 [2024-12-10 04:14:06.616448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.243 [2024-12-10 04:14:06.616463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.243 [2024-12-10 04:14:06.616474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.243 [2024-12-10 04:14:06.616503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.243 qpair failed and we were unable to recover it. 00:26:12.503 [2024-12-10 04:14:06.626322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.503 [2024-12-10 04:14:06.626411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.503 [2024-12-10 04:14:06.626437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.503 [2024-12-10 04:14:06.626451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.503 [2024-12-10 04:14:06.626463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.503 [2024-12-10 04:14:06.626491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.503 qpair failed and we were unable to recover it. 00:26:12.503 [2024-12-10 04:14:06.636322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.503 [2024-12-10 04:14:06.636403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.503 [2024-12-10 04:14:06.636429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.503 [2024-12-10 04:14:06.636443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.503 [2024-12-10 04:14:06.636461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.503 [2024-12-10 04:14:06.636490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.503 qpair failed and we were unable to recover it. 
00:26:12.503 [2024-12-10 04:14:06.646349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.503 [2024-12-10 04:14:06.646436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.503 [2024-12-10 04:14:06.646461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.503 [2024-12-10 04:14:06.646475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.503 [2024-12-10 04:14:06.646486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.503 [2024-12-10 04:14:06.646514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.503 qpair failed and we were unable to recover it. 00:26:12.503 [2024-12-10 04:14:06.656397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.503 [2024-12-10 04:14:06.656497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.503 [2024-12-10 04:14:06.656522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.503 [2024-12-10 04:14:06.656536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.503 [2024-12-10 04:14:06.656559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.503 [2024-12-10 04:14:06.656588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.503 qpair failed and we were unable to recover it. 00:26:12.503 [2024-12-10 04:14:06.666433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.503 [2024-12-10 04:14:06.666558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.504 [2024-12-10 04:14:06.666590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.504 [2024-12-10 04:14:06.666604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.504 [2024-12-10 04:14:06.666616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.504 [2024-12-10 04:14:06.666644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.504 qpair failed and we were unable to recover it. 
00:26:12.504 [2024-12-10 04:14:06.676449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.504 [2024-12-10 04:14:06.676542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.504 [2024-12-10 04:14:06.676577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.504 [2024-12-10 04:14:06.676592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.504 [2024-12-10 04:14:06.676604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.504 [2024-12-10 04:14:06.676633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.504 qpair failed and we were unable to recover it. 00:26:12.504 [2024-12-10 04:14:06.686455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.504 [2024-12-10 04:14:06.686541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.504 [2024-12-10 04:14:06.686574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.504 [2024-12-10 04:14:06.686587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.504 [2024-12-10 04:14:06.686599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.504 [2024-12-10 04:14:06.686628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.504 qpair failed and we were unable to recover it. 00:26:12.504 [2024-12-10 04:14:06.696531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.504 [2024-12-10 04:14:06.696641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.504 [2024-12-10 04:14:06.696665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.504 [2024-12-10 04:14:06.696680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.504 [2024-12-10 04:14:06.696692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.504 [2024-12-10 04:14:06.696719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.504 qpair failed and we were unable to recover it. 
00:26:12.504 [2024-12-10 04:14:06.706522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.504 [2024-12-10 04:14:06.706618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.504 [2024-12-10 04:14:06.706644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.504 [2024-12-10 04:14:06.706658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.504 [2024-12-10 04:14:06.706670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.504 [2024-12-10 04:14:06.706698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.504 qpair failed and we were unable to recover it. 00:26:12.504 [2024-12-10 04:14:06.716580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.504 [2024-12-10 04:14:06.716667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.504 [2024-12-10 04:14:06.716692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.504 [2024-12-10 04:14:06.716706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.504 [2024-12-10 04:14:06.716718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.504 [2024-12-10 04:14:06.716746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.504 qpair failed and we were unable to recover it. 00:26:12.504 [2024-12-10 04:14:06.726578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.504 [2024-12-10 04:14:06.726662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.504 [2024-12-10 04:14:06.726692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.504 [2024-12-10 04:14:06.726707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.504 [2024-12-10 04:14:06.726719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.504 [2024-12-10 04:14:06.726747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.504 qpair failed and we were unable to recover it. 
00:26:12.504 [2024-12-10 04:14:06.736635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.504 [2024-12-10 04:14:06.736736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.504 [2024-12-10 04:14:06.736764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.504 [2024-12-10 04:14:06.736781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.504 [2024-12-10 04:14:06.736793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.504 [2024-12-10 04:14:06.736822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.504 qpair failed and we were unable to recover it. 00:26:12.504 [2024-12-10 04:14:06.746639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.504 [2024-12-10 04:14:06.746728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.504 [2024-12-10 04:14:06.746754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.504 [2024-12-10 04:14:06.746768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.504 [2024-12-10 04:14:06.746779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.504 [2024-12-10 04:14:06.746808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.504 qpair failed and we were unable to recover it. 00:26:12.504 [2024-12-10 04:14:06.756674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.504 [2024-12-10 04:14:06.756759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.504 [2024-12-10 04:14:06.756784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.504 [2024-12-10 04:14:06.756797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.504 [2024-12-10 04:14:06.756809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.504 [2024-12-10 04:14:06.756841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.504 qpair failed and we were unable to recover it. 
00:26:12.504 [2024-12-10 04:14:06.766738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.504 [2024-12-10 04:14:06.766824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.504 [2024-12-10 04:14:06.766849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.504 [2024-12-10 04:14:06.766868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.504 [2024-12-10 04:14:06.766881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.504 [2024-12-10 04:14:06.766910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.504 qpair failed and we were unable to recover it. 00:26:12.504 [2024-12-10 04:14:06.776738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.504 [2024-12-10 04:14:06.776859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.504 [2024-12-10 04:14:06.776884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.504 [2024-12-10 04:14:06.776899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.504 [2024-12-10 04:14:06.776910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.504 [2024-12-10 04:14:06.776939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.504 qpair failed and we were unable to recover it. 00:26:12.504 [2024-12-10 04:14:06.786780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.504 [2024-12-10 04:14:06.786894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.504 [2024-12-10 04:14:06.786918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.504 [2024-12-10 04:14:06.786932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.504 [2024-12-10 04:14:06.786943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.504 [2024-12-10 04:14:06.786971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.504 qpair failed and we were unable to recover it. 
00:26:12.504 [2024-12-10 04:14:06.796798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.504 [2024-12-10 04:14:06.796912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.505 [2024-12-10 04:14:06.796937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.505 [2024-12-10 04:14:06.796951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.505 [2024-12-10 04:14:06.796963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.505 [2024-12-10 04:14:06.796991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.505 qpair failed and we were unable to recover it. 00:26:12.505 [2024-12-10 04:14:06.806828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.505 [2024-12-10 04:14:06.806913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.505 [2024-12-10 04:14:06.806939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.505 [2024-12-10 04:14:06.806953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.505 [2024-12-10 04:14:06.806965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.505 [2024-12-10 04:14:06.806993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.505 qpair failed and we were unable to recover it. 00:26:12.505 [2024-12-10 04:14:06.816911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.505 [2024-12-10 04:14:06.817070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.505 [2024-12-10 04:14:06.817095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.505 [2024-12-10 04:14:06.817109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.505 [2024-12-10 04:14:06.817121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.505 [2024-12-10 04:14:06.817149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.505 qpair failed and we were unable to recover it. 
00:26:12.505 [2024-12-10 04:14:06.826925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.505 [2024-12-10 04:14:06.827014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.505 [2024-12-10 04:14:06.827039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.505 [2024-12-10 04:14:06.827053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.505 [2024-12-10 04:14:06.827065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.505 [2024-12-10 04:14:06.827093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.505 qpair failed and we were unable to recover it. 00:26:12.505 [2024-12-10 04:14:06.836884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.505 [2024-12-10 04:14:06.836981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.505 [2024-12-10 04:14:06.837004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.505 [2024-12-10 04:14:06.837017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.505 [2024-12-10 04:14:06.837029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.505 [2024-12-10 04:14:06.837057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.505 qpair failed and we were unable to recover it. 00:26:12.505 [2024-12-10 04:14:06.847010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.505 [2024-12-10 04:14:06.847111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.505 [2024-12-10 04:14:06.847137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.505 [2024-12-10 04:14:06.847151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.505 [2024-12-10 04:14:06.847163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.505 [2024-12-10 04:14:06.847191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.505 qpair failed and we were unable to recover it. 
00:26:12.505 [2024-12-10 04:14:06.856999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.505 [2024-12-10 04:14:06.857106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.505 [2024-12-10 04:14:06.857140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.505 [2024-12-10 04:14:06.857157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.505 [2024-12-10 04:14:06.857169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.505 [2024-12-10 04:14:06.857198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.505 qpair failed and we were unable to recover it. 00:26:12.505 [2024-12-10 04:14:06.866972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.505 [2024-12-10 04:14:06.867054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.505 [2024-12-10 04:14:06.867078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.505 [2024-12-10 04:14:06.867092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.505 [2024-12-10 04:14:06.867104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.505 [2024-12-10 04:14:06.867132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.505 qpair failed and we were unable to recover it. 00:26:12.505 [2024-12-10 04:14:06.877120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.505 [2024-12-10 04:14:06.877256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.505 [2024-12-10 04:14:06.877281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.505 [2024-12-10 04:14:06.877295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.505 [2024-12-10 04:14:06.877307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.505 [2024-12-10 04:14:06.877335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.505 qpair failed and we were unable to recover it. 
00:26:12.764 [2024-12-10 04:14:06.887029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.764 [2024-12-10 04:14:06.887117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.764 [2024-12-10 04:14:06.887143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.764 [2024-12-10 04:14:06.887158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.764 [2024-12-10 04:14:06.887170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.764 [2024-12-10 04:14:06.887198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.764 qpair failed and we were unable to recover it. 00:26:12.764 [2024-12-10 04:14:06.897126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.764 [2024-12-10 04:14:06.897260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.765 [2024-12-10 04:14:06.897286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.765 [2024-12-10 04:14:06.897306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.765 [2024-12-10 04:14:06.897318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.765 [2024-12-10 04:14:06.897346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.765 qpair failed and we were unable to recover it. 00:26:12.765 [2024-12-10 04:14:06.907081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.765 [2024-12-10 04:14:06.907169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.765 [2024-12-10 04:14:06.907194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.765 [2024-12-10 04:14:06.907208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.765 [2024-12-10 04:14:06.907219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.765 [2024-12-10 04:14:06.907247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.765 qpair failed and we were unable to recover it. 
00:26:12.765 [2024-12-10 04:14:06.917111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.765 [2024-12-10 04:14:06.917198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.765 [2024-12-10 04:14:06.917223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.765 [2024-12-10 04:14:06.917237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.765 [2024-12-10 04:14:06.917249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.765 [2024-12-10 04:14:06.917277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.765 qpair failed and we were unable to recover it. 00:26:12.765 [2024-12-10 04:14:06.927128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.765 [2024-12-10 04:14:06.927216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.765 [2024-12-10 04:14:06.927241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.765 [2024-12-10 04:14:06.927256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.765 [2024-12-10 04:14:06.927267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.765 [2024-12-10 04:14:06.927295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.765 qpair failed and we were unable to recover it. 00:26:12.765 [2024-12-10 04:14:06.937173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.765 [2024-12-10 04:14:06.937267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.765 [2024-12-10 04:14:06.937291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.765 [2024-12-10 04:14:06.937305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.765 [2024-12-10 04:14:06.937317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.765 [2024-12-10 04:14:06.937344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.765 qpair failed and we were unable to recover it. 
00:26:12.765 [2024-12-10 04:14:06.947207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.765 [2024-12-10 04:14:06.947318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.765 [2024-12-10 04:14:06.947343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.765 [2024-12-10 04:14:06.947357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.765 [2024-12-10 04:14:06.947369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.765 [2024-12-10 04:14:06.947397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.765 qpair failed and we were unable to recover it. 00:26:12.765 [2024-12-10 04:14:06.957217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.765 [2024-12-10 04:14:06.957299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.765 [2024-12-10 04:14:06.957324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.765 [2024-12-10 04:14:06.957338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.765 [2024-12-10 04:14:06.957350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.765 [2024-12-10 04:14:06.957378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.765 qpair failed and we were unable to recover it. 00:26:12.765 [2024-12-10 04:14:06.967259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.765 [2024-12-10 04:14:06.967346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.765 [2024-12-10 04:14:06.967371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.765 [2024-12-10 04:14:06.967385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.765 [2024-12-10 04:14:06.967397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.765 [2024-12-10 04:14:06.967425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.765 qpair failed and we were unable to recover it. 
00:26:12.765 [2024-12-10 04:14:06.977319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.765 [2024-12-10 04:14:06.977419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.765 [2024-12-10 04:14:06.977447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.765 [2024-12-10 04:14:06.977462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.765 [2024-12-10 04:14:06.977474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.765 [2024-12-10 04:14:06.977503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.765 qpair failed and we were unable to recover it. 00:26:12.765 [2024-12-10 04:14:06.987371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.765 [2024-12-10 04:14:06.987469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.765 [2024-12-10 04:14:06.987499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.765 [2024-12-10 04:14:06.987514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.765 [2024-12-10 04:14:06.987526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.765 [2024-12-10 04:14:06.987560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.765 qpair failed and we were unable to recover it. 00:26:12.765 [2024-12-10 04:14:06.997621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.765 [2024-12-10 04:14:06.997720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.765 [2024-12-10 04:14:06.997746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.765 [2024-12-10 04:14:06.997760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.765 [2024-12-10 04:14:06.997772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.765 [2024-12-10 04:14:06.997799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.765 qpair failed and we were unable to recover it. 
00:26:12.765 [2024-12-10 04:14:07.007407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.765 [2024-12-10 04:14:07.007494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.765 [2024-12-10 04:14:07.007520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.765 [2024-12-10 04:14:07.007534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.765 [2024-12-10 04:14:07.007552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.765 [2024-12-10 04:14:07.007581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.765 qpair failed and we were unable to recover it. 00:26:12.765 [2024-12-10 04:14:07.017476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.765 [2024-12-10 04:14:07.017587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.765 [2024-12-10 04:14:07.017612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.765 [2024-12-10 04:14:07.017627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.765 [2024-12-10 04:14:07.017638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.765 [2024-12-10 04:14:07.017667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.765 qpair failed and we were unable to recover it. 00:26:12.765 [2024-12-10 04:14:07.027471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.765 [2024-12-10 04:14:07.027570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.765 [2024-12-10 04:14:07.027595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.765 [2024-12-10 04:14:07.027618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.766 [2024-12-10 04:14:07.027631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.766 [2024-12-10 04:14:07.027659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.766 qpair failed and we were unable to recover it. 
00:26:12.766 [2024-12-10 04:14:07.037517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.766 [2024-12-10 04:14:07.037618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.766 [2024-12-10 04:14:07.037642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.766 [2024-12-10 04:14:07.037655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.766 [2024-12-10 04:14:07.037667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.766 [2024-12-10 04:14:07.037695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.766 qpair failed and we were unable to recover it. 00:26:12.766 [2024-12-10 04:14:07.047504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.766 [2024-12-10 04:14:07.047596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.766 [2024-12-10 04:14:07.047620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.766 [2024-12-10 04:14:07.047634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.766 [2024-12-10 04:14:07.047645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.766 [2024-12-10 04:14:07.047674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.766 qpair failed and we were unable to recover it. 00:26:12.766 [2024-12-10 04:14:07.057522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.766 [2024-12-10 04:14:07.057628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.766 [2024-12-10 04:14:07.057652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.766 [2024-12-10 04:14:07.057667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.766 [2024-12-10 04:14:07.057678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.766 [2024-12-10 04:14:07.057706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.766 qpair failed and we were unable to recover it. 
00:26:12.766 [2024-12-10 04:14:07.067534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.766 [2024-12-10 04:14:07.067671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.766 [2024-12-10 04:14:07.067700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.766 [2024-12-10 04:14:07.067716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.766 [2024-12-10 04:14:07.067728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.766 [2024-12-10 04:14:07.067757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.766 qpair failed and we were unable to recover it. 00:26:12.766 [2024-12-10 04:14:07.077561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.766 [2024-12-10 04:14:07.077642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.766 [2024-12-10 04:14:07.077668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.766 [2024-12-10 04:14:07.077682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.766 [2024-12-10 04:14:07.077694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.766 [2024-12-10 04:14:07.077722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.766 qpair failed and we were unable to recover it. 00:26:12.766 [2024-12-10 04:14:07.087583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.766 [2024-12-10 04:14:07.087666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.766 [2024-12-10 04:14:07.087691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.766 [2024-12-10 04:14:07.087705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.766 [2024-12-10 04:14:07.087717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.766 [2024-12-10 04:14:07.087745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.766 qpair failed and we were unable to recover it. 
00:26:12.766 [2024-12-10 04:14:07.097657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.766 [2024-12-10 04:14:07.097784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.766 [2024-12-10 04:14:07.097809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.766 [2024-12-10 04:14:07.097823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.766 [2024-12-10 04:14:07.097834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.766 [2024-12-10 04:14:07.097862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.766 qpair failed and we were unable to recover it. 00:26:12.766 [2024-12-10 04:14:07.107642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.766 [2024-12-10 04:14:07.107733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.766 [2024-12-10 04:14:07.107758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.766 [2024-12-10 04:14:07.107771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.766 [2024-12-10 04:14:07.107783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.766 [2024-12-10 04:14:07.107811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.766 qpair failed and we were unable to recover it. 00:26:12.766 [2024-12-10 04:14:07.117683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.766 [2024-12-10 04:14:07.117776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.766 [2024-12-10 04:14:07.117806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.766 [2024-12-10 04:14:07.117820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.766 [2024-12-10 04:14:07.117832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.766 [2024-12-10 04:14:07.117860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.766 qpair failed and we were unable to recover it. 
00:26:12.766 [2024-12-10 04:14:07.127682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.766 [2024-12-10 04:14:07.127767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.766 [2024-12-10 04:14:07.127792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.766 [2024-12-10 04:14:07.127806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.766 [2024-12-10 04:14:07.127817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.766 [2024-12-10 04:14:07.127845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.766 qpair failed and we were unable to recover it. 00:26:12.766 [2024-12-10 04:14:07.137771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.766 [2024-12-10 04:14:07.137894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.766 [2024-12-10 04:14:07.137919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.766 [2024-12-10 04:14:07.137932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.766 [2024-12-10 04:14:07.137944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:12.766 [2024-12-10 04:14:07.137972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.766 qpair failed and we were unable to recover it. 00:26:13.026 [2024-12-10 04:14:07.147827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.026 [2024-12-10 04:14:07.147914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.026 [2024-12-10 04:14:07.147940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.026 [2024-12-10 04:14:07.147954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.026 [2024-12-10 04:14:07.147966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.026 [2024-12-10 04:14:07.147994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.026 qpair failed and we were unable to recover it. 
00:26:13.026 [2024-12-10 04:14:07.157799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.026 [2024-12-10 04:14:07.157928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.026 [2024-12-10 04:14:07.157955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.026 [2024-12-10 04:14:07.157975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.026 [2024-12-10 04:14:07.157987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.026 [2024-12-10 04:14:07.158015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.026 qpair failed and we were unable to recover it. 00:26:13.026 [2024-12-10 04:14:07.167847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.026 [2024-12-10 04:14:07.167929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.026 [2024-12-10 04:14:07.167953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.026 [2024-12-10 04:14:07.167967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.026 [2024-12-10 04:14:07.167979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.026 [2024-12-10 04:14:07.168007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.026 qpair failed and we were unable to recover it. 00:26:13.026 [2024-12-10 04:14:07.177860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.026 [2024-12-10 04:14:07.177952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.026 [2024-12-10 04:14:07.177977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.026 [2024-12-10 04:14:07.177991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.026 [2024-12-10 04:14:07.178003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.026 [2024-12-10 04:14:07.178030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.026 qpair failed and we were unable to recover it. 
00:26:13.026 [2024-12-10 04:14:07.187869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.026 [2024-12-10 04:14:07.187954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.026 [2024-12-10 04:14:07.187980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.026 [2024-12-10 04:14:07.187993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.026 [2024-12-10 04:14:07.188005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.026 [2024-12-10 04:14:07.188033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.026 qpair failed and we were unable to recover it. 00:26:13.026 [2024-12-10 04:14:07.197920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.026 [2024-12-10 04:14:07.198003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.026 [2024-12-10 04:14:07.198026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.026 [2024-12-10 04:14:07.198040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.026 [2024-12-10 04:14:07.198051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.026 [2024-12-10 04:14:07.198079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.026 qpair failed and we were unable to recover it. 00:26:13.026 [2024-12-10 04:14:07.207925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.026 [2024-12-10 04:14:07.208009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.026 [2024-12-10 04:14:07.208033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.026 [2024-12-10 04:14:07.208047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.026 [2024-12-10 04:14:07.208059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.026 [2024-12-10 04:14:07.208087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.026 qpair failed and we were unable to recover it. 
00:26:13.026 [2024-12-10 04:14:07.218022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.026 [2024-12-10 04:14:07.218118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.026 [2024-12-10 04:14:07.218143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.026 [2024-12-10 04:14:07.218156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.026 [2024-12-10 04:14:07.218169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.026 [2024-12-10 04:14:07.218196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.026 qpair failed and we were unable to recover it. 00:26:13.026 [2024-12-10 04:14:07.227979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.026 [2024-12-10 04:14:07.228066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.026 [2024-12-10 04:14:07.228091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.026 [2024-12-10 04:14:07.228105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.026 [2024-12-10 04:14:07.228116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.026 [2024-12-10 04:14:07.228145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.026 qpair failed and we were unable to recover it. 00:26:13.026 [2024-12-10 04:14:07.238040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.026 [2024-12-10 04:14:07.238124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.026 [2024-12-10 04:14:07.238148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.026 [2024-12-10 04:14:07.238162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.026 [2024-12-10 04:14:07.238174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.026 [2024-12-10 04:14:07.238201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.026 qpair failed and we were unable to recover it. 
00:26:13.026 [2024-12-10 04:14:07.248030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.026 [2024-12-10 04:14:07.248118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.026 [2024-12-10 04:14:07.248143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.026 [2024-12-10 04:14:07.248157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.026 [2024-12-10 04:14:07.248168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.026 [2024-12-10 04:14:07.248196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.026 qpair failed and we were unable to recover it. 00:26:13.026 [2024-12-10 04:14:07.258101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.026 [2024-12-10 04:14:07.258198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.026 [2024-12-10 04:14:07.258222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.026 [2024-12-10 04:14:07.258236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.026 [2024-12-10 04:14:07.258247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.026 [2024-12-10 04:14:07.258275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.026 qpair failed and we were unable to recover it. 00:26:13.027 [2024-12-10 04:14:07.268111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.027 [2024-12-10 04:14:07.268195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.027 [2024-12-10 04:14:07.268220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.027 [2024-12-10 04:14:07.268234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.027 [2024-12-10 04:14:07.268246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.027 [2024-12-10 04:14:07.268273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.027 qpair failed and we were unable to recover it. 
00:26:13.027 [2024-12-10 04:14:07.278159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.027 [2024-12-10 04:14:07.278244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.027 [2024-12-10 04:14:07.278269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.027 [2024-12-10 04:14:07.278283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.027 [2024-12-10 04:14:07.278295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.027 [2024-12-10 04:14:07.278323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.027 qpair failed and we were unable to recover it. 00:26:13.027 [2024-12-10 04:14:07.288178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.027 [2024-12-10 04:14:07.288265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.027 [2024-12-10 04:14:07.288288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.027 [2024-12-10 04:14:07.288307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.027 [2024-12-10 04:14:07.288319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.027 [2024-12-10 04:14:07.288347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.027 qpair failed and we were unable to recover it. 00:26:13.027 [2024-12-10 04:14:07.298284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.027 [2024-12-10 04:14:07.298401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.027 [2024-12-10 04:14:07.298426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.027 [2024-12-10 04:14:07.298440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.027 [2024-12-10 04:14:07.298452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.027 [2024-12-10 04:14:07.298479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.027 qpair failed and we were unable to recover it. 
00:26:13.027 [2024-12-10 04:14:07.308311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.027 [2024-12-10 04:14:07.308442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.027 [2024-12-10 04:14:07.308467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.027 [2024-12-10 04:14:07.308481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.027 [2024-12-10 04:14:07.308492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.027 [2024-12-10 04:14:07.308520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.027 qpair failed and we were unable to recover it. 00:26:13.027 [2024-12-10 04:14:07.318257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.027 [2024-12-10 04:14:07.318341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.027 [2024-12-10 04:14:07.318367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.027 [2024-12-10 04:14:07.318381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.027 [2024-12-10 04:14:07.318393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.027 [2024-12-10 04:14:07.318422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.027 qpair failed and we were unable to recover it. 00:26:13.027 [2024-12-10 04:14:07.328286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.027 [2024-12-10 04:14:07.328380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.027 [2024-12-10 04:14:07.328405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.027 [2024-12-10 04:14:07.328419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.027 [2024-12-10 04:14:07.328431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.027 [2024-12-10 04:14:07.328458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.027 qpair failed and we were unable to recover it. 
00:26:13.027 [2024-12-10 04:14:07.338327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.027 [2024-12-10 04:14:07.338449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.027 [2024-12-10 04:14:07.338475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.027 [2024-12-10 04:14:07.338489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.027 [2024-12-10 04:14:07.338500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.027 [2024-12-10 04:14:07.338528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.027 qpair failed and we were unable to recover it. 00:26:13.027 [2024-12-10 04:14:07.348355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.027 [2024-12-10 04:14:07.348444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.027 [2024-12-10 04:14:07.348469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.027 [2024-12-10 04:14:07.348483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.027 [2024-12-10 04:14:07.348495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.027 [2024-12-10 04:14:07.348522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.027 qpair failed and we were unable to recover it. 00:26:13.027 [2024-12-10 04:14:07.358409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.027 [2024-12-10 04:14:07.358497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.027 [2024-12-10 04:14:07.358522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.027 [2024-12-10 04:14:07.358535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.027 [2024-12-10 04:14:07.358558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.027 [2024-12-10 04:14:07.358588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.027 qpair failed and we were unable to recover it. 
00:26:13.027 [2024-12-10 04:14:07.368379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.027 [2024-12-10 04:14:07.368464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.027 [2024-12-10 04:14:07.368489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.027 [2024-12-10 04:14:07.368502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.027 [2024-12-10 04:14:07.368514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.027 [2024-12-10 04:14:07.368541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.027 qpair failed and we were unable to recover it. 00:26:13.027 [2024-12-10 04:14:07.378417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.027 [2024-12-10 04:14:07.378517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.027 [2024-12-10 04:14:07.378559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.027 [2024-12-10 04:14:07.378575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.027 [2024-12-10 04:14:07.378587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.027 [2024-12-10 04:14:07.378615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.027 qpair failed and we were unable to recover it. 00:26:13.027 [2024-12-10 04:14:07.388478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.027 [2024-12-10 04:14:07.388578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.027 [2024-12-10 04:14:07.388603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.027 [2024-12-10 04:14:07.388617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.027 [2024-12-10 04:14:07.388628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.027 [2024-12-10 04:14:07.388656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.027 qpair failed and we were unable to recover it. 
00:26:13.027 [2024-12-10 04:14:07.398469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.027 [2024-12-10 04:14:07.398570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.027 [2024-12-10 04:14:07.398595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.028 [2024-12-10 04:14:07.398609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.028 [2024-12-10 04:14:07.398621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.028 [2024-12-10 04:14:07.398648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.028 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-10 04:14:07.408494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.287 [2024-12-10 04:14:07.408599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.287 [2024-12-10 04:14:07.408626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.287 [2024-12-10 04:14:07.408641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.287 [2024-12-10 04:14:07.408653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.287 [2024-12-10 04:14:07.408682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-10 04:14:07.418539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.287 [2024-12-10 04:14:07.418647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.287 [2024-12-10 04:14:07.418673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.287 [2024-12-10 04:14:07.418694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.287 [2024-12-10 04:14:07.418707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.287 [2024-12-10 04:14:07.418736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.287 qpair failed and we were unable to recover it. 
00:26:13.287 [2024-12-10 04:14:07.428556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.428643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.428668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.428682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.428694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.428722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-10 04:14:07.438603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.438697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.438722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.438736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.438747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.438775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-10 04:14:07.448645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.448752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.448777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.448791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.448803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.448837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 
00:26:13.288 [2024-12-10 04:14:07.458661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.458755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.458781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.458795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.458806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.458839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-10 04:14:07.468735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.468845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.468869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.468883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.468894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.468922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-10 04:14:07.478808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.478945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.478970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.478984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.478996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.479024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 
00:26:13.288 [2024-12-10 04:14:07.488747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.488833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.488857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.488871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.488883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.488911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-10 04:14:07.498845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.498965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.498990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.499003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.499015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.499043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-10 04:14:07.508779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.508872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.508897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.508910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.508922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.508950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 
00:26:13.288 [2024-12-10 04:14:07.518872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.518961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.518986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.519000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.519011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.519039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-10 04:14:07.528854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.528937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.528962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.528975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.528987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.529014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-10 04:14:07.538902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.539000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.539024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.539037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.539048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.539076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 
00:26:13.288 [2024-12-10 04:14:07.548913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.549000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.549025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.549044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.549057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.549085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-10 04:14:07.558924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.559014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.559038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.559052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.559064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.559091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-10 04:14:07.568981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.569068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.569093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.569106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.569118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.569145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 
00:26:13.288 [2024-12-10 04:14:07.579033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.579127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.579152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.579166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.579177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.579205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-10 04:14:07.589021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.589105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.589133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.589147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.589159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.589192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-10 04:14:07.599047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.599136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.599161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.599175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.599187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.599215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 
00:26:13.288 [2024-12-10 04:14:07.609135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.609251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.609279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.609296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.609308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.609337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-10 04:14:07.619136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.619253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.619278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.619292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.619303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.619332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-10 04:14:07.629117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.629216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.629241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.629255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.629267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.629295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 
00:26:13.288 [2024-12-10 04:14:07.639152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.639246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.639271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.639285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.639296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.639324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-10 04:14:07.649232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.649342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.649367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.649381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.649393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.649420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-10 04:14:07.659233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.288 [2024-12-10 04:14:07.659331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.288 [2024-12-10 04:14:07.659356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.288 [2024-12-10 04:14:07.659370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.288 [2024-12-10 04:14:07.659382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.288 [2024-12-10 04:14:07.659410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.288 qpair failed and we were unable to recover it. 
00:26:13.549 [2024-12-10 04:14:07.669310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.549 [2024-12-10 04:14:07.669398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.549 [2024-12-10 04:14:07.669425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.549 [2024-12-10 04:14:07.669439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.549 [2024-12-10 04:14:07.669451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.549 [2024-12-10 04:14:07.669480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.550 qpair failed and we were unable to recover it. 00:26:13.550 [2024-12-10 04:14:07.679293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.550 [2024-12-10 04:14:07.679378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.550 [2024-12-10 04:14:07.679404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.550 [2024-12-10 04:14:07.679426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.550 [2024-12-10 04:14:07.679438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.550 [2024-12-10 04:14:07.679467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.550 qpair failed and we were unable to recover it. 00:26:13.550 [2024-12-10 04:14:07.689303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.550 [2024-12-10 04:14:07.689386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.550 [2024-12-10 04:14:07.689411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.550 [2024-12-10 04:14:07.689426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.550 [2024-12-10 04:14:07.689437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.550 [2024-12-10 04:14:07.689465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.550 qpair failed and we were unable to recover it. 
00:26:13.550 [2024-12-10 04:14:07.699347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.550 [2024-12-10 04:14:07.699442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.550 [2024-12-10 04:14:07.699467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.550 [2024-12-10 04:14:07.699480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.550 [2024-12-10 04:14:07.699492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.550 [2024-12-10 04:14:07.699520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.550 qpair failed and we were unable to recover it. 00:26:13.550 [2024-12-10 04:14:07.709356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.550 [2024-12-10 04:14:07.709443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.550 [2024-12-10 04:14:07.709468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.550 [2024-12-10 04:14:07.709482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.550 [2024-12-10 04:14:07.709494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.550 [2024-12-10 04:14:07.709522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.550 qpair failed and we were unable to recover it. 00:26:13.550 [2024-12-10 04:14:07.719396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.550 [2024-12-10 04:14:07.719479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.550 [2024-12-10 04:14:07.719505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.550 [2024-12-10 04:14:07.719519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.550 [2024-12-10 04:14:07.719530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.550 [2024-12-10 04:14:07.719572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.550 qpair failed and we were unable to recover it. 
00:26:13.550 [2024-12-10 04:14:07.729435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.550 [2024-12-10 04:14:07.729519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.550 [2024-12-10 04:14:07.729550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.550 [2024-12-10 04:14:07.729566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.550 [2024-12-10 04:14:07.729578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.550 [2024-12-10 04:14:07.729606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.550 qpair failed and we were unable to recover it. 00:26:13.550 [2024-12-10 04:14:07.739460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.550 [2024-12-10 04:14:07.739564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.550 [2024-12-10 04:14:07.739589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.550 [2024-12-10 04:14:07.739604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.550 [2024-12-10 04:14:07.739616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.550 [2024-12-10 04:14:07.739644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.550 qpair failed and we were unable to recover it. 00:26:13.550 [2024-12-10 04:14:07.749472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.550 [2024-12-10 04:14:07.749562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.550 [2024-12-10 04:14:07.749588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.550 [2024-12-10 04:14:07.749602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.550 [2024-12-10 04:14:07.749614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.550 [2024-12-10 04:14:07.749642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.550 qpair failed and we were unable to recover it. 
00:26:13.550 [2024-12-10 04:14:07.759494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.550 [2024-12-10 04:14:07.759606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.550 [2024-12-10 04:14:07.759632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.550 [2024-12-10 04:14:07.759645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.550 [2024-12-10 04:14:07.759657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.550 [2024-12-10 04:14:07.759686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.550 qpair failed and we were unable to recover it. 00:26:13.550 [2024-12-10 04:14:07.769532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.550 [2024-12-10 04:14:07.769648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.550 [2024-12-10 04:14:07.769677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.550 [2024-12-10 04:14:07.769692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.550 [2024-12-10 04:14:07.769704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.550 [2024-12-10 04:14:07.769733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.550 qpair failed and we were unable to recover it. 00:26:13.550 [2024-12-10 04:14:07.779605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.550 [2024-12-10 04:14:07.779707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.550 [2024-12-10 04:14:07.779736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.550 [2024-12-10 04:14:07.779753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.550 [2024-12-10 04:14:07.779765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.550 [2024-12-10 04:14:07.779795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.550 qpair failed and we were unable to recover it. 
00:26:13.550 [2024-12-10 04:14:07.789587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.550 [2024-12-10 04:14:07.789672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.550 [2024-12-10 04:14:07.789696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.550 [2024-12-10 04:14:07.789710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.550 [2024-12-10 04:14:07.789722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.550 [2024-12-10 04:14:07.789750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.550 qpair failed and we were unable to recover it. 00:26:13.550 [2024-12-10 04:14:07.799621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.550 [2024-12-10 04:14:07.799708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.550 [2024-12-10 04:14:07.799733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.550 [2024-12-10 04:14:07.799747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.550 [2024-12-10 04:14:07.799759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.550 [2024-12-10 04:14:07.799787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.550 qpair failed and we were unable to recover it. 00:26:13.550 [2024-12-10 04:14:07.809668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.550 [2024-12-10 04:14:07.809797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.551 [2024-12-10 04:14:07.809823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.551 [2024-12-10 04:14:07.809842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.551 [2024-12-10 04:14:07.809854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.551 [2024-12-10 04:14:07.809882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.551 qpair failed and we were unable to recover it. 
00:26:13.551 [2024-12-10 04:14:07.819706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.551 [2024-12-10 04:14:07.819801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.551 [2024-12-10 04:14:07.819826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.551 [2024-12-10 04:14:07.819840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.551 [2024-12-10 04:14:07.819852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.551 [2024-12-10 04:14:07.819880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.551 qpair failed and we were unable to recover it. 00:26:13.551 [2024-12-10 04:14:07.829687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.551 [2024-12-10 04:14:07.829783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.551 [2024-12-10 04:14:07.829809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.551 [2024-12-10 04:14:07.829823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.551 [2024-12-10 04:14:07.829835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.551 [2024-12-10 04:14:07.829863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.551 qpair failed and we were unable to recover it. 00:26:13.551 [2024-12-10 04:14:07.839744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.551 [2024-12-10 04:14:07.839826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.551 [2024-12-10 04:14:07.839850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.551 [2024-12-10 04:14:07.839864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.551 [2024-12-10 04:14:07.839875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.551 [2024-12-10 04:14:07.839904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.551 qpair failed and we were unable to recover it. 
00:26:13.551 [2024-12-10 04:14:07.849742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.551 [2024-12-10 04:14:07.849833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.551 [2024-12-10 04:14:07.849858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.551 [2024-12-10 04:14:07.849871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.551 [2024-12-10 04:14:07.849883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.551 [2024-12-10 04:14:07.849916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.551 qpair failed and we were unable to recover it. 00:26:13.551 [2024-12-10 04:14:07.859849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.551 [2024-12-10 04:14:07.859948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.551 [2024-12-10 04:14:07.859973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.551 [2024-12-10 04:14:07.859986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.551 [2024-12-10 04:14:07.859998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.551 [2024-12-10 04:14:07.860026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.551 qpair failed and we were unable to recover it. 00:26:13.551 [2024-12-10 04:14:07.869914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.551 [2024-12-10 04:14:07.870001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.551 [2024-12-10 04:14:07.870026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.551 [2024-12-10 04:14:07.870039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.551 [2024-12-10 04:14:07.870051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.551 [2024-12-10 04:14:07.870079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.551 qpair failed and we were unable to recover it. 
00:26:13.551 [2024-12-10 04:14:07.879943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.551 [2024-12-10 04:14:07.880034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.551 [2024-12-10 04:14:07.880062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.551 [2024-12-10 04:14:07.880078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.551 [2024-12-10 04:14:07.880090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.551 [2024-12-10 04:14:07.880119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.551 qpair failed and we were unable to recover it. 00:26:13.551 [2024-12-10 04:14:07.889897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.551 [2024-12-10 04:14:07.889985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.551 [2024-12-10 04:14:07.890011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.551 [2024-12-10 04:14:07.890025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.551 [2024-12-10 04:14:07.890036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.551 [2024-12-10 04:14:07.890065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.551 qpair failed and we were unable to recover it. 00:26:13.551 [2024-12-10 04:14:07.899945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.551 [2024-12-10 04:14:07.900065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.551 [2024-12-10 04:14:07.900092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.551 [2024-12-10 04:14:07.900106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.551 [2024-12-10 04:14:07.900118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.551 [2024-12-10 04:14:07.900145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.551 qpair failed and we were unable to recover it. 
00:26:13.551 [2024-12-10 04:14:07.909955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.551 [2024-12-10 04:14:07.910039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.551 [2024-12-10 04:14:07.910068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.551 [2024-12-10 04:14:07.910082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.551 [2024-12-10 04:14:07.910093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.551 [2024-12-10 04:14:07.910122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.551 qpair failed and we were unable to recover it. 00:26:13.551 [2024-12-10 04:14:07.919994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.551 [2024-12-10 04:14:07.920076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.551 [2024-12-10 04:14:07.920100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.551 [2024-12-10 04:14:07.920115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.551 [2024-12-10 04:14:07.920126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.551 [2024-12-10 04:14:07.920154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.551 qpair failed and we were unable to recover it. 00:26:13.551 [2024-12-10 04:14:07.930006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.551 [2024-12-10 04:14:07.930102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.551 [2024-12-10 04:14:07.930130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.551 [2024-12-10 04:14:07.930144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.551 [2024-12-10 04:14:07.930156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.551 [2024-12-10 04:14:07.930185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.551 qpair failed and we were unable to recover it. 
00:26:13.813 [2024-12-10 04:14:07.940031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.813 [2024-12-10 04:14:07.940124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.813 [2024-12-10 04:14:07.940150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.813 [2024-12-10 04:14:07.940174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.813 [2024-12-10 04:14:07.940187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.813 [2024-12-10 04:14:07.940215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.813 qpair failed and we were unable to recover it. 00:26:13.813 [2024-12-10 04:14:07.950056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.813 [2024-12-10 04:14:07.950151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.813 [2024-12-10 04:14:07.950176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.813 [2024-12-10 04:14:07.950190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.813 [2024-12-10 04:14:07.950202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.813 [2024-12-10 04:14:07.950230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.813 qpair failed and we were unable to recover it. 00:26:13.813 [2024-12-10 04:14:07.960085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.813 [2024-12-10 04:14:07.960160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.813 [2024-12-10 04:14:07.960186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.813 [2024-12-10 04:14:07.960200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.813 [2024-12-10 04:14:07.960212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.813 [2024-12-10 04:14:07.960240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.813 qpair failed and we were unable to recover it. 
00:26:13.813 [2024-12-10 04:14:07.970083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.813 [2024-12-10 04:14:07.970170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.813 [2024-12-10 04:14:07.970194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.813 [2024-12-10 04:14:07.970208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.813 [2024-12-10 04:14:07.970220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.813 [2024-12-10 04:14:07.970248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.813 qpair failed and we were unable to recover it. 00:26:13.813 [2024-12-10 04:14:07.980177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.813 [2024-12-10 04:14:07.980281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.813 [2024-12-10 04:14:07.980306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.813 [2024-12-10 04:14:07.980319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.813 [2024-12-10 04:14:07.980331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.813 [2024-12-10 04:14:07.980365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.813 qpair failed and we were unable to recover it. 00:26:13.813 [2024-12-10 04:14:07.990194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.813 [2024-12-10 04:14:07.990285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.813 [2024-12-10 04:14:07.990310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.813 [2024-12-10 04:14:07.990324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.813 [2024-12-10 04:14:07.990336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.813 [2024-12-10 04:14:07.990364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.813 qpair failed and we were unable to recover it. 
00:26:13.813 [2024-12-10 04:14:08.000218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.813 [2024-12-10 04:14:08.000326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.813 [2024-12-10 04:14:08.000352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.813 [2024-12-10 04:14:08.000365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.813 [2024-12-10 04:14:08.000377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.813 [2024-12-10 04:14:08.000405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.813 qpair failed and we were unable to recover it. 00:26:13.813 [2024-12-10 04:14:08.010188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.813 [2024-12-10 04:14:08.010316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.813 [2024-12-10 04:14:08.010342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.813 [2024-12-10 04:14:08.010356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.813 [2024-12-10 04:14:08.010368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.813 [2024-12-10 04:14:08.010395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.813 qpair failed and we were unable to recover it. 00:26:13.813 [2024-12-10 04:14:08.020232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.813 [2024-12-10 04:14:08.020324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.813 [2024-12-10 04:14:08.020349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.813 [2024-12-10 04:14:08.020364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.813 [2024-12-10 04:14:08.020375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.813 [2024-12-10 04:14:08.020403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.813 qpair failed and we were unable to recover it. 
00:26:13.813 [2024-12-10 04:14:08.030252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.814 [2024-12-10 04:14:08.030345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.814 [2024-12-10 04:14:08.030373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.814 [2024-12-10 04:14:08.030387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.814 [2024-12-10 04:14:08.030398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.814 [2024-12-10 04:14:08.030427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.814 qpair failed and we were unable to recover it. 00:26:13.814 [2024-12-10 04:14:08.040280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.814 [2024-12-10 04:14:08.040362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.814 [2024-12-10 04:14:08.040386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.814 [2024-12-10 04:14:08.040399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.814 [2024-12-10 04:14:08.040411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.814 [2024-12-10 04:14:08.040439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.814 qpair failed and we were unable to recover it. 00:26:13.814 [2024-12-10 04:14:08.050307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.814 [2024-12-10 04:14:08.050391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.814 [2024-12-10 04:14:08.050416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.814 [2024-12-10 04:14:08.050430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.814 [2024-12-10 04:14:08.050442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.814 [2024-12-10 04:14:08.050470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.814 qpair failed and we were unable to recover it. 
00:26:13.814 [2024-12-10 04:14:08.060353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.814 [2024-12-10 04:14:08.060444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.814 [2024-12-10 04:14:08.060469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.814 [2024-12-10 04:14:08.060483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.814 [2024-12-10 04:14:08.060494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.814 [2024-12-10 04:14:08.060522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.814 qpair failed and we were unable to recover it. 00:26:13.814 [2024-12-10 04:14:08.070381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.814 [2024-12-10 04:14:08.070477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.814 [2024-12-10 04:14:08.070502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.814 [2024-12-10 04:14:08.070522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.814 [2024-12-10 04:14:08.070534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.814 [2024-12-10 04:14:08.070569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.814 qpair failed and we were unable to recover it. 00:26:13.814 [2024-12-10 04:14:08.080400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.814 [2024-12-10 04:14:08.080489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.814 [2024-12-10 04:14:08.080514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.814 [2024-12-10 04:14:08.080528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.814 [2024-12-10 04:14:08.080540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.814 [2024-12-10 04:14:08.080576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.814 qpair failed and we were unable to recover it. 
00:26:13.814 [2024-12-10 04:14:08.090399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.814 [2024-12-10 04:14:08.090482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.814 [2024-12-10 04:14:08.090507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.814 [2024-12-10 04:14:08.090521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.814 [2024-12-10 04:14:08.090532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.814 [2024-12-10 04:14:08.090568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.814 qpair failed and we were unable to recover it. 00:26:13.814 [2024-12-10 04:14:08.100465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.814 [2024-12-10 04:14:08.100586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.814 [2024-12-10 04:14:08.100611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.814 [2024-12-10 04:14:08.100625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.814 [2024-12-10 04:14:08.100636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.814 [2024-12-10 04:14:08.100664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.814 qpair failed and we were unable to recover it. 00:26:13.814 [2024-12-10 04:14:08.110503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.814 [2024-12-10 04:14:08.110606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.814 [2024-12-10 04:14:08.110631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.814 [2024-12-10 04:14:08.110645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.814 [2024-12-10 04:14:08.110656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.814 [2024-12-10 04:14:08.110689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.814 qpair failed and we were unable to recover it. 
00:26:13.814 [2024-12-10 04:14:08.120509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.814 [2024-12-10 04:14:08.120601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.814 [2024-12-10 04:14:08.120626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.814 [2024-12-10 04:14:08.120639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.814 [2024-12-10 04:14:08.120651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.814 [2024-12-10 04:14:08.120679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.814 qpair failed and we were unable to recover it. 00:26:13.814 [2024-12-10 04:14:08.130525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.814 [2024-12-10 04:14:08.130618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.814 [2024-12-10 04:14:08.130647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.814 [2024-12-10 04:14:08.130663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.814 [2024-12-10 04:14:08.130675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.814 [2024-12-10 04:14:08.130704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.814 qpair failed and we were unable to recover it. 00:26:13.814 [2024-12-10 04:14:08.140581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.814 [2024-12-10 04:14:08.140674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.814 [2024-12-10 04:14:08.140704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.814 [2024-12-10 04:14:08.140720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.814 [2024-12-10 04:14:08.140732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.814 [2024-12-10 04:14:08.140761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.814 qpair failed and we were unable to recover it. 
00:26:13.814 [2024-12-10 04:14:08.150609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.814 [2024-12-10 04:14:08.150702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.814 [2024-12-10 04:14:08.150727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.814 [2024-12-10 04:14:08.150742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.814 [2024-12-10 04:14:08.150753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.814 [2024-12-10 04:14:08.150782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.814 qpair failed and we were unable to recover it. 00:26:13.814 [2024-12-10 04:14:08.160603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.814 [2024-12-10 04:14:08.160687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.814 [2024-12-10 04:14:08.160712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.815 [2024-12-10 04:14:08.160727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.815 [2024-12-10 04:14:08.160738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.815 [2024-12-10 04:14:08.160766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.815 qpair failed and we were unable to recover it. 00:26:13.815 [2024-12-10 04:14:08.170680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.815 [2024-12-10 04:14:08.170776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.815 [2024-12-10 04:14:08.170802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.815 [2024-12-10 04:14:08.170816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.815 [2024-12-10 04:14:08.170827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.815 [2024-12-10 04:14:08.170855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.815 qpair failed and we were unable to recover it. 
00:26:13.815 [2024-12-10 04:14:08.180693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.815 [2024-12-10 04:14:08.180800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.815 [2024-12-10 04:14:08.180825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.815 [2024-12-10 04:14:08.180839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.815 [2024-12-10 04:14:08.180850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.815 [2024-12-10 04:14:08.180878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.815 qpair failed and we were unable to recover it. 00:26:13.815 [2024-12-10 04:14:08.190696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.815 [2024-12-10 04:14:08.190786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.815 [2024-12-10 04:14:08.190812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.815 [2024-12-10 04:14:08.190834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.815 [2024-12-10 04:14:08.190847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:13.815 [2024-12-10 04:14:08.190878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.815 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-10 04:14:08.200739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.075 [2024-12-10 04:14:08.200825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.075 [2024-12-10 04:14:08.200851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.075 [2024-12-10 04:14:08.200872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.075 [2024-12-10 04:14:08.200884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.075 [2024-12-10 04:14:08.200913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.075 qpair failed and we were unable to recover it. 
00:26:14.075 [2024-12-10 04:14:08.210816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.075 [2024-12-10 04:14:08.210896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.075 [2024-12-10 04:14:08.210921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.075 [2024-12-10 04:14:08.210935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.075 [2024-12-10 04:14:08.210946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.075 [2024-12-10 04:14:08.210974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-10 04:14:08.220796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.075 [2024-12-10 04:14:08.220885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.075 [2024-12-10 04:14:08.220911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.075 [2024-12-10 04:14:08.220925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.075 [2024-12-10 04:14:08.220937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.075 [2024-12-10 04:14:08.220965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-10 04:14:08.230835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.075 [2024-12-10 04:14:08.230961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.075 [2024-12-10 04:14:08.230986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.075 [2024-12-10 04:14:08.231000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.075 [2024-12-10 04:14:08.231012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.075 [2024-12-10 04:14:08.231039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.075 qpair failed and we were unable to recover it. 
00:26:14.075 [2024-12-10 04:14:08.240863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.075 [2024-12-10 04:14:08.240945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.075 [2024-12-10 04:14:08.240969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.075 [2024-12-10 04:14:08.240983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.075 [2024-12-10 04:14:08.240995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.075 [2024-12-10 04:14:08.241028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-10 04:14:08.250874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.075 [2024-12-10 04:14:08.251000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.075 [2024-12-10 04:14:08.251025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.075 [2024-12-10 04:14:08.251038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.075 [2024-12-10 04:14:08.251049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.075 [2024-12-10 04:14:08.251077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-10 04:14:08.260892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.075 [2024-12-10 04:14:08.260982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.075 [2024-12-10 04:14:08.261007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.075 [2024-12-10 04:14:08.261021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.075 [2024-12-10 04:14:08.261033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.075 [2024-12-10 04:14:08.261060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.075 qpair failed and we were unable to recover it. 
00:26:14.075 [2024-12-10 04:14:08.270941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.076 [2024-12-10 04:14:08.271030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.076 [2024-12-10 04:14:08.271054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.076 [2024-12-10 04:14:08.271068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.076 [2024-12-10 04:14:08.271080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.076 [2024-12-10 04:14:08.271108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-10 04:14:08.280994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.076 [2024-12-10 04:14:08.281074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.076 [2024-12-10 04:14:08.281098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.076 [2024-12-10 04:14:08.281112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.076 [2024-12-10 04:14:08.281124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.076 [2024-12-10 04:14:08.281152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-10 04:14:08.290955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.076 [2024-12-10 04:14:08.291038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.076 [2024-12-10 04:14:08.291062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.076 [2024-12-10 04:14:08.291075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.076 [2024-12-10 04:14:08.291086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.076 [2024-12-10 04:14:08.291114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.076 qpair failed and we were unable to recover it. 
00:26:14.076 [2024-12-10 04:14:08.301041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.076 [2024-12-10 04:14:08.301133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.076 [2024-12-10 04:14:08.301158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.076 [2024-12-10 04:14:08.301171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.076 [2024-12-10 04:14:08.301183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.076 [2024-12-10 04:14:08.301211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-10 04:14:08.311078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.076 [2024-12-10 04:14:08.311210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.076 [2024-12-10 04:14:08.311235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.076 [2024-12-10 04:14:08.311248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.076 [2024-12-10 04:14:08.311260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.076 [2024-12-10 04:14:08.311287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-10 04:14:08.321082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.076 [2024-12-10 04:14:08.321163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.076 [2024-12-10 04:14:08.321188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.076 [2024-12-10 04:14:08.321202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.076 [2024-12-10 04:14:08.321214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.076 [2024-12-10 04:14:08.321241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.076 qpair failed and we were unable to recover it. 
00:26:14.076 [2024-12-10 04:14:08.331061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.076 [2024-12-10 04:14:08.331160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.076 [2024-12-10 04:14:08.331184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.076 [2024-12-10 04:14:08.331203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.076 [2024-12-10 04:14:08.331216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.076 [2024-12-10 04:14:08.331244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-10 04:14:08.341212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.076 [2024-12-10 04:14:08.341307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.076 [2024-12-10 04:14:08.341331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.076 [2024-12-10 04:14:08.341344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.076 [2024-12-10 04:14:08.341356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.076 [2024-12-10 04:14:08.341383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-10 04:14:08.351129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.076 [2024-12-10 04:14:08.351220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.076 [2024-12-10 04:14:08.351246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.076 [2024-12-10 04:14:08.351260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.076 [2024-12-10 04:14:08.351271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.076 [2024-12-10 04:14:08.351299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.076 qpair failed and we were unable to recover it. 
00:26:14.076 [2024-12-10 04:14:08.361154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.076 [2024-12-10 04:14:08.361259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.076 [2024-12-10 04:14:08.361284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.076 [2024-12-10 04:14:08.361298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.076 [2024-12-10 04:14:08.361309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.076 [2024-12-10 04:14:08.361337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-10 04:14:08.371216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.076 [2024-12-10 04:14:08.371301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.076 [2024-12-10 04:14:08.371326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.076 [2024-12-10 04:14:08.371340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.076 [2024-12-10 04:14:08.371351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.076 [2024-12-10 04:14:08.371384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-10 04:14:08.381329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.076 [2024-12-10 04:14:08.381450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.076 [2024-12-10 04:14:08.381476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.076 [2024-12-10 04:14:08.381489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.076 [2024-12-10 04:14:08.381501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.076 [2024-12-10 04:14:08.381528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.076 qpair failed and we were unable to recover it. 
00:26:14.076 [2024-12-10 04:14:08.391270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.076 [2024-12-10 04:14:08.391366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.076 [2024-12-10 04:14:08.391391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.076 [2024-12-10 04:14:08.391405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.076 [2024-12-10 04:14:08.391417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.076 [2024-12-10 04:14:08.391444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-10 04:14:08.401269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.076 [2024-12-10 04:14:08.401354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.076 [2024-12-10 04:14:08.401378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.077 [2024-12-10 04:14:08.401391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.077 [2024-12-10 04:14:08.401403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.077 [2024-12-10 04:14:08.401431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-10 04:14:08.411297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.077 [2024-12-10 04:14:08.411410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.077 [2024-12-10 04:14:08.411435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.077 [2024-12-10 04:14:08.411449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.077 [2024-12-10 04:14:08.411461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.077 [2024-12-10 04:14:08.411489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.077 qpair failed and we were unable to recover it. 
00:26:14.077 [2024-12-10 04:14:08.421336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.077 [2024-12-10 04:14:08.421429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.077 [2024-12-10 04:14:08.421454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.077 [2024-12-10 04:14:08.421468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.077 [2024-12-10 04:14:08.421480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.077 [2024-12-10 04:14:08.421508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-10 04:14:08.431394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.077 [2024-12-10 04:14:08.431518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.077 [2024-12-10 04:14:08.431543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.077 [2024-12-10 04:14:08.431572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.077 [2024-12-10 04:14:08.431584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.077 [2024-12-10 04:14:08.431612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-10 04:14:08.441417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.077 [2024-12-10 04:14:08.441534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.077 [2024-12-10 04:14:08.441566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.077 [2024-12-10 04:14:08.441581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.077 [2024-12-10 04:14:08.441592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.077 [2024-12-10 04:14:08.441621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.077 qpair failed and we were unable to recover it. 
00:26:14.077 [2024-12-10 04:14:08.451395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.077 [2024-12-10 04:14:08.451475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.077 [2024-12-10 04:14:08.451500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.077 [2024-12-10 04:14:08.451515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.077 [2024-12-10 04:14:08.451526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.077 [2024-12-10 04:14:08.451561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.336 [2024-12-10 04:14:08.461447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.336 [2024-12-10 04:14:08.461540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.336 [2024-12-10 04:14:08.461588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.336 [2024-12-10 04:14:08.461613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.336 [2024-12-10 04:14:08.461626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.336 [2024-12-10 04:14:08.461658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.336 qpair failed and we were unable to recover it. 00:26:14.336 [2024-12-10 04:14:08.471505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.336 [2024-12-10 04:14:08.471616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.336 [2024-12-10 04:14:08.471641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.336 [2024-12-10 04:14:08.471655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.336 [2024-12-10 04:14:08.471667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.336 [2024-12-10 04:14:08.471695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.336 qpair failed and we were unable to recover it. 
00:26:14.336 [2024-12-10 04:14:08.481503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.336 [2024-12-10 04:14:08.481605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.336 [2024-12-10 04:14:08.481631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.336 [2024-12-10 04:14:08.481646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.336 [2024-12-10 04:14:08.481657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.337 [2024-12-10 04:14:08.481685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.337 qpair failed and we were unable to recover it. 00:26:14.337 [2024-12-10 04:14:08.491565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.337 [2024-12-10 04:14:08.491665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.337 [2024-12-10 04:14:08.491690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.337 [2024-12-10 04:14:08.491703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.337 [2024-12-10 04:14:08.491714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.337 [2024-12-10 04:14:08.491743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.337 qpair failed and we were unable to recover it. 00:26:14.337 [2024-12-10 04:14:08.501610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.337 [2024-12-10 04:14:08.501725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.337 [2024-12-10 04:14:08.501751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.337 [2024-12-10 04:14:08.501765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.337 [2024-12-10 04:14:08.501776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.337 [2024-12-10 04:14:08.501810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.337 qpair failed and we were unable to recover it. 
00:26:14.337 [2024-12-10 04:14:08.511592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.337 [2024-12-10 04:14:08.511684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.337 [2024-12-10 04:14:08.511709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.337 [2024-12-10 04:14:08.511723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.337 [2024-12-10 04:14:08.511735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.337 [2024-12-10 04:14:08.511763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.337 qpair failed and we were unable to recover it. 00:26:14.337 [2024-12-10 04:14:08.521624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.337 [2024-12-10 04:14:08.521710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.337 [2024-12-10 04:14:08.521737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.337 [2024-12-10 04:14:08.521754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.337 [2024-12-10 04:14:08.521765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.337 [2024-12-10 04:14:08.521795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.337 qpair failed and we were unable to recover it. 00:26:14.337 [2024-12-10 04:14:08.531631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.337 [2024-12-10 04:14:08.531759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.337 [2024-12-10 04:14:08.531784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.337 [2024-12-10 04:14:08.531798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.337 [2024-12-10 04:14:08.531810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.337 [2024-12-10 04:14:08.531837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.337 qpair failed and we were unable to recover it. 
00:26:14.337 [2024-12-10 04:14:08.541682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.337 [2024-12-10 04:14:08.541786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.337 [2024-12-10 04:14:08.541812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.337 [2024-12-10 04:14:08.541826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.337 [2024-12-10 04:14:08.541838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.337 [2024-12-10 04:14:08.541866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.337 qpair failed and we were unable to recover it. 00:26:14.337 [2024-12-10 04:14:08.551722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.337 [2024-12-10 04:14:08.551840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.337 [2024-12-10 04:14:08.551866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.337 [2024-12-10 04:14:08.551880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.337 [2024-12-10 04:14:08.551891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.337 [2024-12-10 04:14:08.551920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.337 qpair failed and we were unable to recover it. 00:26:14.337 [2024-12-10 04:14:08.561731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.337 [2024-12-10 04:14:08.561842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.337 [2024-12-10 04:14:08.561867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.337 [2024-12-10 04:14:08.561881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.337 [2024-12-10 04:14:08.561892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.337 [2024-12-10 04:14:08.561920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.337 qpair failed and we were unable to recover it. 
00:26:14.337 [2024-12-10 04:14:08.571762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.337 [2024-12-10 04:14:08.571849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.337 [2024-12-10 04:14:08.571874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.337 [2024-12-10 04:14:08.571888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.337 [2024-12-10 04:14:08.571900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.337 [2024-12-10 04:14:08.571927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.337 qpair failed and we were unable to recover it. 00:26:14.337 [2024-12-10 04:14:08.581810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.337 [2024-12-10 04:14:08.581934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.337 [2024-12-10 04:14:08.581959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.337 [2024-12-10 04:14:08.581972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.337 [2024-12-10 04:14:08.581984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.337 [2024-12-10 04:14:08.582012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.337 qpair failed and we were unable to recover it. 00:26:14.337 [2024-12-10 04:14:08.591840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.337 [2024-12-10 04:14:08.591965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.337 [2024-12-10 04:14:08.591995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.337 [2024-12-10 04:14:08.592009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.337 [2024-12-10 04:14:08.592021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.337 [2024-12-10 04:14:08.592049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.337 qpair failed and we were unable to recover it. 
00:26:14.337 [2024-12-10 04:14:08.601847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.337 [2024-12-10 04:14:08.601973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.337 [2024-12-10 04:14:08.601999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.337 [2024-12-10 04:14:08.602013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.337 [2024-12-10 04:14:08.602024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.337 [2024-12-10 04:14:08.602052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.337 qpair failed and we were unable to recover it. 00:26:14.337 [2024-12-10 04:14:08.611907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.337 [2024-12-10 04:14:08.612026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.337 [2024-12-10 04:14:08.612051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.337 [2024-12-10 04:14:08.612065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.337 [2024-12-10 04:14:08.612076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.337 [2024-12-10 04:14:08.612104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.337 qpair failed and we were unable to recover it. 00:26:14.337 [2024-12-10 04:14:08.621936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.338 [2024-12-10 04:14:08.622026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.338 [2024-12-10 04:14:08.622051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.338 [2024-12-10 04:14:08.622064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.338 [2024-12-10 04:14:08.622075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.338 [2024-12-10 04:14:08.622104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.338 qpair failed and we were unable to recover it. 
00:26:14.338 [2024-12-10 04:14:08.631924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.338 [2024-12-10 04:14:08.632016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.338 [2024-12-10 04:14:08.632041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.338 [2024-12-10 04:14:08.632055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.338 [2024-12-10 04:14:08.632067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.338 [2024-12-10 04:14:08.632099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.338 qpair failed and we were unable to recover it. 00:26:14.338 [2024-12-10 04:14:08.641986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.338 [2024-12-10 04:14:08.642071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.338 [2024-12-10 04:14:08.642096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.338 [2024-12-10 04:14:08.642110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.338 [2024-12-10 04:14:08.642121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.338 [2024-12-10 04:14:08.642149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.338 qpair failed and we were unable to recover it. 00:26:14.338 [2024-12-10 04:14:08.652012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.338 [2024-12-10 04:14:08.652091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.338 [2024-12-10 04:14:08.652116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.338 [2024-12-10 04:14:08.652130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.338 [2024-12-10 04:14:08.652141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.338 [2024-12-10 04:14:08.652169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.338 qpair failed and we were unable to recover it. 
00:26:14.338 [2024-12-10 04:14:08.662011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.338 [2024-12-10 04:14:08.662098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.338 [2024-12-10 04:14:08.662123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.338 [2024-12-10 04:14:08.662137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.338 [2024-12-10 04:14:08.662149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.338 [2024-12-10 04:14:08.662176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.338 qpair failed and we were unable to recover it. 00:26:14.338 [2024-12-10 04:14:08.672079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.338 [2024-12-10 04:14:08.672202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.338 [2024-12-10 04:14:08.672227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.338 [2024-12-10 04:14:08.672240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.338 [2024-12-10 04:14:08.672252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.338 [2024-12-10 04:14:08.672280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.338 qpair failed and we were unable to recover it. 00:26:14.338 [2024-12-10 04:14:08.682093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.338 [2024-12-10 04:14:08.682177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.338 [2024-12-10 04:14:08.682202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.338 [2024-12-10 04:14:08.682216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.338 [2024-12-10 04:14:08.682228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.338 [2024-12-10 04:14:08.682256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.338 qpair failed and we were unable to recover it. 
00:26:14.338 [2024-12-10 04:14:08.692081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.338 [2024-12-10 04:14:08.692186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.338 [2024-12-10 04:14:08.692212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.338 [2024-12-10 04:14:08.692226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.338 [2024-12-10 04:14:08.692237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.338 [2024-12-10 04:14:08.692265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.338 qpair failed and we were unable to recover it. 00:26:14.338 [2024-12-10 04:14:08.702164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.338 [2024-12-10 04:14:08.702254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.338 [2024-12-10 04:14:08.702279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.338 [2024-12-10 04:14:08.702293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.338 [2024-12-10 04:14:08.702305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.338 [2024-12-10 04:14:08.702332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.338 qpair failed and we were unable to recover it. 00:26:14.338 [2024-12-10 04:14:08.712185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.338 [2024-12-10 04:14:08.712282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.338 [2024-12-10 04:14:08.712310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.338 [2024-12-10 04:14:08.712326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.338 [2024-12-10 04:14:08.712338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.338 [2024-12-10 04:14:08.712366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.338 qpair failed and we were unable to recover it. 
00:26:14.598 [2024-12-10 04:14:08.722212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.598 [2024-12-10 04:14:08.722297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.598 [2024-12-10 04:14:08.722329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.598 [2024-12-10 04:14:08.722344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.598 [2024-12-10 04:14:08.722356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.598 [2024-12-10 04:14:08.722384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.598 qpair failed and we were unable to recover it. 00:26:14.598 [2024-12-10 04:14:08.732223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.598 [2024-12-10 04:14:08.732317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.598 [2024-12-10 04:14:08.732342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.598 [2024-12-10 04:14:08.732357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.598 [2024-12-10 04:14:08.732368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.598 [2024-12-10 04:14:08.732396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.598 qpair failed and we were unable to recover it. 00:26:14.598 [2024-12-10 04:14:08.742257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.598 [2024-12-10 04:14:08.742348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.598 [2024-12-10 04:14:08.742373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.598 [2024-12-10 04:14:08.742388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.598 [2024-12-10 04:14:08.742399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.598 [2024-12-10 04:14:08.742427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.598 qpair failed and we were unable to recover it. 
00:26:14.598 [2024-12-10 04:14:08.752277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.598 [2024-12-10 04:14:08.752373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.598 [2024-12-10 04:14:08.752399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.598 [2024-12-10 04:14:08.752413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.598 [2024-12-10 04:14:08.752424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.598 [2024-12-10 04:14:08.752452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.598 qpair failed and we were unable to recover it. 00:26:14.598 [2024-12-10 04:14:08.762301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.598 [2024-12-10 04:14:08.762386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.598 [2024-12-10 04:14:08.762411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.598 [2024-12-10 04:14:08.762426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.598 [2024-12-10 04:14:08.762437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.598 [2024-12-10 04:14:08.762473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.598 qpair failed and we were unable to recover it. 00:26:14.598 [2024-12-10 04:14:08.772341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.598 [2024-12-10 04:14:08.772457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.598 [2024-12-10 04:14:08.772483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.598 [2024-12-10 04:14:08.772496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.598 [2024-12-10 04:14:08.772508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.598 [2024-12-10 04:14:08.772536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.598 qpair failed and we were unable to recover it. 
00:26:14.598 [2024-12-10 04:14:08.782375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.598 [2024-12-10 04:14:08.782464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.598 [2024-12-10 04:14:08.782489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.598 [2024-12-10 04:14:08.782503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.598 [2024-12-10 04:14:08.782514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.598 [2024-12-10 04:14:08.782542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.598 qpair failed and we were unable to recover it. 00:26:14.598 [2024-12-10 04:14:08.792408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.598 [2024-12-10 04:14:08.792502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.598 [2024-12-10 04:14:08.792526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.598 [2024-12-10 04:14:08.792539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.598 [2024-12-10 04:14:08.792560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.598 [2024-12-10 04:14:08.792589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.598 qpair failed and we were unable to recover it. 00:26:14.598 [2024-12-10 04:14:08.802421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.598 [2024-12-10 04:14:08.802505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.598 [2024-12-10 04:14:08.802530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.598 [2024-12-10 04:14:08.802551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.598 [2024-12-10 04:14:08.802565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:14.598 [2024-12-10 04:14:08.802594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.598 qpair failed and we were unable to recover it. 
00:26:14.598 [2024-12-10 04:14:08.812447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.598 [2024-12-10 04:14:08.812525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.598 [2024-12-10 04:14:08.812566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.598 [2024-12-10 04:14:08.812582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.598 [2024-12-10 04:14:08.812594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.598 [2024-12-10 04:14:08.812625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.598 qpair failed and we were unable to recover it. 00:26:14.598 [2024-12-10 04:14:08.822529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.598 [2024-12-10 04:14:08.822630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.598 [2024-12-10 04:14:08.822658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.598 [2024-12-10 04:14:08.822673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.598 [2024-12-10 04:14:08.822684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.598 [2024-12-10 04:14:08.822714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.598 qpair failed and we were unable to recover it. 00:26:14.598 [2024-12-10 04:14:08.832564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.599 [2024-12-10 04:14:08.832671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.599 [2024-12-10 04:14:08.832697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.599 [2024-12-10 04:14:08.832711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.599 [2024-12-10 04:14:08.832723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.599 [2024-12-10 04:14:08.832753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.599 qpair failed and we were unable to recover it. 
00:26:14.599 [2024-12-10 04:14:08.842556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.599 [2024-12-10 04:14:08.842646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.599 [2024-12-10 04:14:08.842671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.599 [2024-12-10 04:14:08.842684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.599 [2024-12-10 04:14:08.842695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.599 [2024-12-10 04:14:08.842724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.599 qpair failed and we were unable to recover it. 00:26:14.599 [2024-12-10 04:14:08.852564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.599 [2024-12-10 04:14:08.852642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.599 [2024-12-10 04:14:08.852674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.599 [2024-12-10 04:14:08.852689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.599 [2024-12-10 04:14:08.852701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.599 [2024-12-10 04:14:08.852743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.599 qpair failed and we were unable to recover it. 00:26:14.599 [2024-12-10 04:14:08.862622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.599 [2024-12-10 04:14:08.862741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.599 [2024-12-10 04:14:08.862767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.599 [2024-12-10 04:14:08.862781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.599 [2024-12-10 04:14:08.862793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.599 [2024-12-10 04:14:08.862822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.599 qpair failed and we were unable to recover it. 
00:26:14.599 [2024-12-10 04:14:08.872629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.599 [2024-12-10 04:14:08.872718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.599 [2024-12-10 04:14:08.872743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.599 [2024-12-10 04:14:08.872757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.599 [2024-12-10 04:14:08.872769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.599 [2024-12-10 04:14:08.872799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.599 qpair failed and we were unable to recover it. 00:26:14.599 [2024-12-10 04:14:08.882654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.599 [2024-12-10 04:14:08.882753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.599 [2024-12-10 04:14:08.882779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.599 [2024-12-10 04:14:08.882793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.599 [2024-12-10 04:14:08.882805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.599 [2024-12-10 04:14:08.882834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.599 qpair failed and we were unable to recover it. 00:26:14.599 [2024-12-10 04:14:08.892682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.599 [2024-12-10 04:14:08.892789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.599 [2024-12-10 04:14:08.892815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.599 [2024-12-10 04:14:08.892829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.599 [2024-12-10 04:14:08.892846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.599 [2024-12-10 04:14:08.892876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.599 qpair failed and we were unable to recover it. 
00:26:14.599 [2024-12-10 04:14:08.902771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.599 [2024-12-10 04:14:08.902864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.599 [2024-12-10 04:14:08.902891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.599 [2024-12-10 04:14:08.902905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.599 [2024-12-10 04:14:08.902921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.599 [2024-12-10 04:14:08.902952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.599 qpair failed and we were unable to recover it. 00:26:14.599 [2024-12-10 04:14:08.912754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.599 [2024-12-10 04:14:08.912864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.599 [2024-12-10 04:14:08.912891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.599 [2024-12-10 04:14:08.912905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.599 [2024-12-10 04:14:08.912917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.599 [2024-12-10 04:14:08.912959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.599 qpair failed and we were unable to recover it. 00:26:14.599 [2024-12-10 04:14:08.922788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.599 [2024-12-10 04:14:08.922911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.599 [2024-12-10 04:14:08.922937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.599 [2024-12-10 04:14:08.922952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.599 [2024-12-10 04:14:08.922964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.599 [2024-12-10 04:14:08.922993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.599 qpair failed and we were unable to recover it. 
00:26:14.599 [2024-12-10 04:14:08.932789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.599 [2024-12-10 04:14:08.932870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.599 [2024-12-10 04:14:08.932896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.599 [2024-12-10 04:14:08.932910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.599 [2024-12-10 04:14:08.932922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.599 [2024-12-10 04:14:08.932951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.599 qpair failed and we were unable to recover it. 00:26:14.599 [2024-12-10 04:14:08.942827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.599 [2024-12-10 04:14:08.942914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.599 [2024-12-10 04:14:08.942939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.599 [2024-12-10 04:14:08.942953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.599 [2024-12-10 04:14:08.942965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.599 [2024-12-10 04:14:08.942995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.599 qpair failed and we were unable to recover it. 00:26:14.599 [2024-12-10 04:14:08.952870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.599 [2024-12-10 04:14:08.952963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.599 [2024-12-10 04:14:08.952989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.599 [2024-12-10 04:14:08.953003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.599 [2024-12-10 04:14:08.953014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.599 [2024-12-10 04:14:08.953044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.599 qpair failed and we were unable to recover it. 
00:26:14.599 [2024-12-10 04:14:08.962905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.599 [2024-12-10 04:14:08.963026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.599 [2024-12-10 04:14:08.963052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.599 [2024-12-10 04:14:08.963066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.600 [2024-12-10 04:14:08.963078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.600 [2024-12-10 04:14:08.963107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.600 qpair failed and we were unable to recover it. 00:26:14.600 [2024-12-10 04:14:08.972891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.600 [2024-12-10 04:14:08.972972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.600 [2024-12-10 04:14:08.972998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.600 [2024-12-10 04:14:08.973012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.600 [2024-12-10 04:14:08.973024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.600 [2024-12-10 04:14:08.973054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.600 qpair failed and we were unable to recover it. 00:26:14.859 [2024-12-10 04:14:08.982960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.859 [2024-12-10 04:14:08.983052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.859 [2024-12-10 04:14:08.983084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.859 [2024-12-10 04:14:08.983099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.859 [2024-12-10 04:14:08.983110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.859 [2024-12-10 04:14:08.983140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.859 qpair failed and we were unable to recover it. 
00:26:14.859 [2024-12-10 04:14:08.992968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.859 [2024-12-10 04:14:08.993061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.859 [2024-12-10 04:14:08.993087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.859 [2024-12-10 04:14:08.993101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.859 [2024-12-10 04:14:08.993113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.859 [2024-12-10 04:14:08.993142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.859 qpair failed and we were unable to recover it. 00:26:14.859 [2024-12-10 04:14:09.003004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.859 [2024-12-10 04:14:09.003092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.859 [2024-12-10 04:14:09.003117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.859 [2024-12-10 04:14:09.003131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.859 [2024-12-10 04:14:09.003143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.859 [2024-12-10 04:14:09.003172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.859 qpair failed and we were unable to recover it. 00:26:14.859 [2024-12-10 04:14:09.013150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.859 [2024-12-10 04:14:09.013256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.859 [2024-12-10 04:14:09.013282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.859 [2024-12-10 04:14:09.013296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.859 [2024-12-10 04:14:09.013307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.859 [2024-12-10 04:14:09.013336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.859 qpair failed and we were unable to recover it. 
00:26:14.859 [2024-12-10 04:14:09.023114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.859 [2024-12-10 04:14:09.023200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.859 [2024-12-10 04:14:09.023227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.859 [2024-12-10 04:14:09.023241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.859 [2024-12-10 04:14:09.023262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.859 [2024-12-10 04:14:09.023294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.859 qpair failed and we were unable to recover it. 00:26:14.859 [2024-12-10 04:14:09.033168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.859 [2024-12-10 04:14:09.033276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.859 [2024-12-10 04:14:09.033303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.859 [2024-12-10 04:14:09.033317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.859 [2024-12-10 04:14:09.033329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.859 [2024-12-10 04:14:09.033359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.859 qpair failed and we were unable to recover it. 00:26:14.859 [2024-12-10 04:14:09.043180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.859 [2024-12-10 04:14:09.043266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.859 [2024-12-10 04:14:09.043291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.860 [2024-12-10 04:14:09.043305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.860 [2024-12-10 04:14:09.043317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.860 [2024-12-10 04:14:09.043346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.860 qpair failed and we were unable to recover it. 
00:26:14.860 [2024-12-10 04:14:09.053172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.860 [2024-12-10 04:14:09.053263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.860 [2024-12-10 04:14:09.053288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.860 [2024-12-10 04:14:09.053303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.860 [2024-12-10 04:14:09.053314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.860 [2024-12-10 04:14:09.053343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.860 qpair failed and we were unable to recover it. 00:26:14.860 [2024-12-10 04:14:09.063179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.860 [2024-12-10 04:14:09.063269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.860 [2024-12-10 04:14:09.063295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.860 [2024-12-10 04:14:09.063309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.860 [2024-12-10 04:14:09.063321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.860 [2024-12-10 04:14:09.063363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.860 qpair failed and we were unable to recover it. 00:26:14.860 [2024-12-10 04:14:09.073201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.860 [2024-12-10 04:14:09.073330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.860 [2024-12-10 04:14:09.073355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.860 [2024-12-10 04:14:09.073369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.860 [2024-12-10 04:14:09.073382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.860 [2024-12-10 04:14:09.073411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.860 qpair failed and we were unable to recover it. 
00:26:14.860 [2024-12-10 04:14:09.083234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.860 [2024-12-10 04:14:09.083316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.860 [2024-12-10 04:14:09.083341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.860 [2024-12-10 04:14:09.083355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.860 [2024-12-10 04:14:09.083366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.860 [2024-12-10 04:14:09.083396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.860 qpair failed and we were unable to recover it. 00:26:14.860 [2024-12-10 04:14:09.093232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.860 [2024-12-10 04:14:09.093313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.860 [2024-12-10 04:14:09.093338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.860 [2024-12-10 04:14:09.093351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.860 [2024-12-10 04:14:09.093363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.860 [2024-12-10 04:14:09.093391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.860 qpair failed and we were unable to recover it. 00:26:14.860 [2024-12-10 04:14:09.103304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.860 [2024-12-10 04:14:09.103395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.860 [2024-12-10 04:14:09.103421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.860 [2024-12-10 04:14:09.103436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.860 [2024-12-10 04:14:09.103447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.860 [2024-12-10 04:14:09.103476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.860 qpair failed and we were unable to recover it. 
00:26:14.860 [2024-12-10 04:14:09.113332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.860 [2024-12-10 04:14:09.113462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.860 [2024-12-10 04:14:09.113494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.860 [2024-12-10 04:14:09.113509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.860 [2024-12-10 04:14:09.113521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.860 [2024-12-10 04:14:09.113557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.860 qpair failed and we were unable to recover it. 00:26:14.860 [2024-12-10 04:14:09.123355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.860 [2024-12-10 04:14:09.123455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.860 [2024-12-10 04:14:09.123480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.860 [2024-12-10 04:14:09.123495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.860 [2024-12-10 04:14:09.123506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.860 [2024-12-10 04:14:09.123535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.860 qpair failed and we were unable to recover it. 00:26:14.860 [2024-12-10 04:14:09.133397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.860 [2024-12-10 04:14:09.133514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.860 [2024-12-10 04:14:09.133540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.860 [2024-12-10 04:14:09.133565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.860 [2024-12-10 04:14:09.133577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.860 [2024-12-10 04:14:09.133619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.860 qpair failed and we were unable to recover it. 
00:26:14.860 [2024-12-10 04:14:09.143460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.860 [2024-12-10 04:14:09.143555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.860 [2024-12-10 04:14:09.143581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.860 [2024-12-10 04:14:09.143595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.860 [2024-12-10 04:14:09.143607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.860 [2024-12-10 04:14:09.143637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.860 qpair failed and we were unable to recover it. 00:26:14.860 [2024-12-10 04:14:09.153436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.860 [2024-12-10 04:14:09.153571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.860 [2024-12-10 04:14:09.153597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.860 [2024-12-10 04:14:09.153617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.860 [2024-12-10 04:14:09.153630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.860 [2024-12-10 04:14:09.153660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.860 qpair failed and we were unable to recover it. 00:26:14.860 [2024-12-10 04:14:09.163433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.860 [2024-12-10 04:14:09.163514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.860 [2024-12-10 04:14:09.163540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.860 [2024-12-10 04:14:09.163562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.860 [2024-12-10 04:14:09.163575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.860 [2024-12-10 04:14:09.163604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.860 qpair failed and we were unable to recover it. 
00:26:14.860 [2024-12-10 04:14:09.173479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.861 [2024-12-10 04:14:09.173586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.861 [2024-12-10 04:14:09.173615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.861 [2024-12-10 04:14:09.173630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.861 [2024-12-10 04:14:09.173642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.861 [2024-12-10 04:14:09.173672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.861 qpair failed and we were unable to recover it. 00:26:14.861 [2024-12-10 04:14:09.183565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.861 [2024-12-10 04:14:09.183657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.861 [2024-12-10 04:14:09.183682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.861 [2024-12-10 04:14:09.183696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.861 [2024-12-10 04:14:09.183708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.861 [2024-12-10 04:14:09.183737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.861 qpair failed and we were unable to recover it. 00:26:14.861 [2024-12-10 04:14:09.193535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.861 [2024-12-10 04:14:09.193641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.861 [2024-12-10 04:14:09.193666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.861 [2024-12-10 04:14:09.193680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.861 [2024-12-10 04:14:09.193692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.861 [2024-12-10 04:14:09.193727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.861 qpair failed and we were unable to recover it. 
00:26:14.861 [2024-12-10 04:14:09.203561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.861 [2024-12-10 04:14:09.203650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.861 [2024-12-10 04:14:09.203675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.861 [2024-12-10 04:14:09.203689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.861 [2024-12-10 04:14:09.203701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.861 [2024-12-10 04:14:09.203730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.861 qpair failed and we were unable to recover it. 00:26:14.861 [2024-12-10 04:14:09.213585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.861 [2024-12-10 04:14:09.213666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.861 [2024-12-10 04:14:09.213692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.861 [2024-12-10 04:14:09.213706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.861 [2024-12-10 04:14:09.213718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.861 [2024-12-10 04:14:09.213747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.861 qpair failed and we were unable to recover it. 00:26:14.861 [2024-12-10 04:14:09.223618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.861 [2024-12-10 04:14:09.223714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.861 [2024-12-10 04:14:09.223739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.861 [2024-12-10 04:14:09.223753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.861 [2024-12-10 04:14:09.223765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.861 [2024-12-10 04:14:09.223794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.861 qpair failed and we were unable to recover it. 
00:26:14.861 [2024-12-10 04:14:09.233659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.861 [2024-12-10 04:14:09.233752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.861 [2024-12-10 04:14:09.233778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.861 [2024-12-10 04:14:09.233792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.861 [2024-12-10 04:14:09.233804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:14.861 [2024-12-10 04:14:09.233833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.861 qpair failed and we were unable to recover it. 00:26:15.122 [2024-12-10 04:14:09.243713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.122 [2024-12-10 04:14:09.243838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.122 [2024-12-10 04:14:09.243869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.122 [2024-12-10 04:14:09.243884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.122 [2024-12-10 04:14:09.243895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.122 [2024-12-10 04:14:09.243925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.122 qpair failed and we were unable to recover it. 00:26:15.122 [2024-12-10 04:14:09.253678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.122 [2024-12-10 04:14:09.253761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.122 [2024-12-10 04:14:09.253787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.122 [2024-12-10 04:14:09.253801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.122 [2024-12-10 04:14:09.253813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.122 [2024-12-10 04:14:09.253842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.122 qpair failed and we were unable to recover it. 
00:26:15.122 [2024-12-10 04:14:09.263731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.122 [2024-12-10 04:14:09.263821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.122 [2024-12-10 04:14:09.263846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.122 [2024-12-10 04:14:09.263861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.122 [2024-12-10 04:14:09.263872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.122 [2024-12-10 04:14:09.263901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.122 qpair failed and we were unable to recover it. 00:26:15.122 [2024-12-10 04:14:09.273772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.122 [2024-12-10 04:14:09.273859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.122 [2024-12-10 04:14:09.273885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.122 [2024-12-10 04:14:09.273899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.122 [2024-12-10 04:14:09.273911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.122 [2024-12-10 04:14:09.273940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.122 qpair failed and we were unable to recover it. 00:26:15.122 [2024-12-10 04:14:09.283774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.122 [2024-12-10 04:14:09.283860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.122 [2024-12-10 04:14:09.283888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.122 [2024-12-10 04:14:09.283911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.122 [2024-12-10 04:14:09.283925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.123 [2024-12-10 04:14:09.283955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.123 qpair failed and we were unable to recover it. 
00:26:15.123 [2024-12-10 04:14:09.293794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.123 [2024-12-10 04:14:09.293874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.123 [2024-12-10 04:14:09.293898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.123 [2024-12-10 04:14:09.293911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.123 [2024-12-10 04:14:09.293923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.123 [2024-12-10 04:14:09.293953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.123 qpair failed and we were unable to recover it. 00:26:15.123 [2024-12-10 04:14:09.303895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.123 [2024-12-10 04:14:09.303985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.123 [2024-12-10 04:14:09.304011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.123 [2024-12-10 04:14:09.304025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.123 [2024-12-10 04:14:09.304037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.123 [2024-12-10 04:14:09.304066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.123 qpair failed and we were unable to recover it. 00:26:15.123 [2024-12-10 04:14:09.313871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.123 [2024-12-10 04:14:09.313962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.123 [2024-12-10 04:14:09.313987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.123 [2024-12-10 04:14:09.314001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.123 [2024-12-10 04:14:09.314013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.123 [2024-12-10 04:14:09.314043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.123 qpair failed and we were unable to recover it. 
00:26:15.123 [2024-12-10 04:14:09.323884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.123 [2024-12-10 04:14:09.323964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.123 [2024-12-10 04:14:09.323989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.123 [2024-12-10 04:14:09.324002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.123 [2024-12-10 04:14:09.324014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.123 [2024-12-10 04:14:09.324050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.123 qpair failed and we were unable to recover it. 00:26:15.123 [2024-12-10 04:14:09.333910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.123 [2024-12-10 04:14:09.333994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.123 [2024-12-10 04:14:09.334019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.123 [2024-12-10 04:14:09.334032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.123 [2024-12-10 04:14:09.334044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.123 [2024-12-10 04:14:09.334073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.123 qpair failed and we were unable to recover it. 00:26:15.123 [2024-12-10 04:14:09.343958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.123 [2024-12-10 04:14:09.344048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.123 [2024-12-10 04:14:09.344074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.123 [2024-12-10 04:14:09.344088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.123 [2024-12-10 04:14:09.344099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.123 [2024-12-10 04:14:09.344129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.123 qpair failed and we were unable to recover it. 
00:26:15.123 [2024-12-10 04:14:09.354000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.123 [2024-12-10 04:14:09.354137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.123 [2024-12-10 04:14:09.354163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.123 [2024-12-10 04:14:09.354176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.123 [2024-12-10 04:14:09.354188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.123 [2024-12-10 04:14:09.354217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.123 qpair failed and we were unable to recover it. 00:26:15.123 [2024-12-10 04:14:09.363994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.123 [2024-12-10 04:14:09.364076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.123 [2024-12-10 04:14:09.364102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.123 [2024-12-10 04:14:09.364116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.123 [2024-12-10 04:14:09.364128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.123 [2024-12-10 04:14:09.364157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.123 qpair failed and we were unable to recover it. 00:26:15.123 [2024-12-10 04:14:09.374022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.123 [2024-12-10 04:14:09.374104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.123 [2024-12-10 04:14:09.374129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.123 [2024-12-10 04:14:09.374143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.123 [2024-12-10 04:14:09.374155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.123 [2024-12-10 04:14:09.374184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.123 qpair failed and we were unable to recover it. 
00:26:15.123 [2024-12-10 04:14:09.384061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.123 [2024-12-10 04:14:09.384145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.123 [2024-12-10 04:14:09.384170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.123 [2024-12-10 04:14:09.384184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.123 [2024-12-10 04:14:09.384196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.123 [2024-12-10 04:14:09.384226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.123 qpair failed and we were unable to recover it. 00:26:15.123 [2024-12-10 04:14:09.394089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.123 [2024-12-10 04:14:09.394188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.123 [2024-12-10 04:14:09.394213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.123 [2024-12-10 04:14:09.394227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.123 [2024-12-10 04:14:09.394239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.123 [2024-12-10 04:14:09.394268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.123 qpair failed and we were unable to recover it. 00:26:15.123 [2024-12-10 04:14:09.404130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.123 [2024-12-10 04:14:09.404212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.123 [2024-12-10 04:14:09.404238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.123 [2024-12-10 04:14:09.404252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.123 [2024-12-10 04:14:09.404264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.123 [2024-12-10 04:14:09.404293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.123 qpair failed and we were unable to recover it. 
00:26:15.123 [2024-12-10 04:14:09.414123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.123 [2024-12-10 04:14:09.414215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.123 [2024-12-10 04:14:09.414246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.123 [2024-12-10 04:14:09.414261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.123 [2024-12-10 04:14:09.414273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.123 [2024-12-10 04:14:09.414303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.123 qpair failed and we were unable to recover it. 00:26:15.123 [2024-12-10 04:14:09.424203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.124 [2024-12-10 04:14:09.424324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.124 [2024-12-10 04:14:09.424349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.124 [2024-12-10 04:14:09.424363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.124 [2024-12-10 04:14:09.424375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.124 [2024-12-10 04:14:09.424404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.124 qpair failed and we were unable to recover it. 00:26:15.124 [2024-12-10 04:14:09.434229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.124 [2024-12-10 04:14:09.434325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.124 [2024-12-10 04:14:09.434351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.124 [2024-12-10 04:14:09.434365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.124 [2024-12-10 04:14:09.434377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.124 [2024-12-10 04:14:09.434406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.124 qpair failed and we were unable to recover it. 
00:26:15.124 [2024-12-10 04:14:09.444242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.124 [2024-12-10 04:14:09.444328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.124 [2024-12-10 04:14:09.444353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.124 [2024-12-10 04:14:09.444367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.124 [2024-12-10 04:14:09.444379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.124 [2024-12-10 04:14:09.444409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.124 qpair failed and we were unable to recover it. 00:26:15.124 [2024-12-10 04:14:09.454238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.124 [2024-12-10 04:14:09.454322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.124 [2024-12-10 04:14:09.454347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.124 [2024-12-10 04:14:09.454361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.124 [2024-12-10 04:14:09.454379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.124 [2024-12-10 04:14:09.454409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.124 qpair failed and we were unable to recover it. 00:26:15.124 [2024-12-10 04:14:09.464318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.124 [2024-12-10 04:14:09.464412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.124 [2024-12-10 04:14:09.464436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.124 [2024-12-10 04:14:09.464449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.124 [2024-12-10 04:14:09.464461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.124 [2024-12-10 04:14:09.464490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.124 qpair failed and we were unable to recover it. 
00:26:15.124 [2024-12-10 04:14:09.474332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.124 [2024-12-10 04:14:09.474453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.124 [2024-12-10 04:14:09.474476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.124 [2024-12-10 04:14:09.474490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.124 [2024-12-10 04:14:09.474502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.124 [2024-12-10 04:14:09.474532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.124 qpair failed and we were unable to recover it. 00:26:15.124 [2024-12-10 04:14:09.484351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.124 [2024-12-10 04:14:09.484428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.124 [2024-12-10 04:14:09.484452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.124 [2024-12-10 04:14:09.484466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.124 [2024-12-10 04:14:09.484477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.124 [2024-12-10 04:14:09.484508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.124 qpair failed and we were unable to recover it. 00:26:15.124 [2024-12-10 04:14:09.494373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.124 [2024-12-10 04:14:09.494459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.124 [2024-12-10 04:14:09.494483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.124 [2024-12-10 04:14:09.494497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.124 [2024-12-10 04:14:09.494509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.124 [2024-12-10 04:14:09.494538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.124 qpair failed and we were unable to recover it. 
00:26:15.386 [2024-12-10 04:14:09.504436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.386 [2024-12-10 04:14:09.504569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.386 [2024-12-10 04:14:09.504597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.386 [2024-12-10 04:14:09.504612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.386 [2024-12-10 04:14:09.504623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.386 [2024-12-10 04:14:09.504653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.386 qpair failed and we were unable to recover it. 00:26:15.386 [2024-12-10 04:14:09.514417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.386 [2024-12-10 04:14:09.514516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.386 [2024-12-10 04:14:09.514552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.386 [2024-12-10 04:14:09.514571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.386 [2024-12-10 04:14:09.514584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.386 [2024-12-10 04:14:09.514614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.386 qpair failed and we were unable to recover it. 00:26:15.386 [2024-12-10 04:14:09.524482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.386 [2024-12-10 04:14:09.524570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.386 [2024-12-10 04:14:09.524595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.386 [2024-12-10 04:14:09.524613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.386 [2024-12-10 04:14:09.524626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.386 [2024-12-10 04:14:09.524656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.386 qpair failed and we were unable to recover it. 
00:26:15.386 [2024-12-10 04:14:09.534454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.386 [2024-12-10 04:14:09.534535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.386 [2024-12-10 04:14:09.534566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.386 [2024-12-10 04:14:09.534581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.386 [2024-12-10 04:14:09.534594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.386 [2024-12-10 04:14:09.534623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.386 qpair failed and we were unable to recover it. 00:26:15.386 [2024-12-10 04:14:09.544497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.386 [2024-12-10 04:14:09.544593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.386 [2024-12-10 04:14:09.544625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.386 [2024-12-10 04:14:09.544640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.386 [2024-12-10 04:14:09.544652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.386 [2024-12-10 04:14:09.544682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.386 qpair failed and we were unable to recover it. 00:26:15.386 [2024-12-10 04:14:09.554529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.386 [2024-12-10 04:14:09.554630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.386 [2024-12-10 04:14:09.554660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.386 [2024-12-10 04:14:09.554678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.386 [2024-12-10 04:14:09.554691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.386 [2024-12-10 04:14:09.554722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.386 qpair failed and we were unable to recover it. 
00:26:15.386 [2024-12-10 04:14:09.564565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.386 [2024-12-10 04:14:09.564665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.386 [2024-12-10 04:14:09.564692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.386 [2024-12-10 04:14:09.564707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.386 [2024-12-10 04:14:09.564721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.386 [2024-12-10 04:14:09.564751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.386 qpair failed and we were unable to recover it. 00:26:15.386 [2024-12-10 04:14:09.574588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.386 [2024-12-10 04:14:09.574666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.386 [2024-12-10 04:14:09.574691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.386 [2024-12-10 04:14:09.574705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.387 [2024-12-10 04:14:09.574717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.387 [2024-12-10 04:14:09.574747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.387 qpair failed and we were unable to recover it. 00:26:15.387 [2024-12-10 04:14:09.584705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.387 [2024-12-10 04:14:09.584805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.387 [2024-12-10 04:14:09.584830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.387 [2024-12-10 04:14:09.584844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.387 [2024-12-10 04:14:09.584862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.387 [2024-12-10 04:14:09.584892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.387 qpair failed and we were unable to recover it. 
00:26:15.387 [2024-12-10 04:14:09.594663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.387 [2024-12-10 04:14:09.594749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.387 [2024-12-10 04:14:09.594777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.387 [2024-12-10 04:14:09.594794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.387 [2024-12-10 04:14:09.594806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.387 [2024-12-10 04:14:09.594838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.387 qpair failed and we were unable to recover it. 00:26:15.387 [2024-12-10 04:14:09.604783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.387 [2024-12-10 04:14:09.604871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.387 [2024-12-10 04:14:09.604895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.387 [2024-12-10 04:14:09.604909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.387 [2024-12-10 04:14:09.604921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.387 [2024-12-10 04:14:09.604951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.387 qpair failed and we were unable to recover it. 00:26:15.387 [2024-12-10 04:14:09.614712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.387 [2024-12-10 04:14:09.614800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.387 [2024-12-10 04:14:09.614825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.387 [2024-12-10 04:14:09.614839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.387 [2024-12-10 04:14:09.614851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.387 [2024-12-10 04:14:09.614881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.387 qpair failed and we were unable to recover it. 
00:26:15.387 [2024-12-10 04:14:09.624770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.387 [2024-12-10 04:14:09.624909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.387 [2024-12-10 04:14:09.624933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.387 [2024-12-10 04:14:09.624948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.387 [2024-12-10 04:14:09.624960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.387 [2024-12-10 04:14:09.624989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.387 qpair failed and we were unable to recover it. 00:26:15.387 [2024-12-10 04:14:09.634754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.387 [2024-12-10 04:14:09.634841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.387 [2024-12-10 04:14:09.634866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.387 [2024-12-10 04:14:09.634880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.387 [2024-12-10 04:14:09.634892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.387 [2024-12-10 04:14:09.634922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.387 qpair failed and we were unable to recover it. 00:26:15.387 [2024-12-10 04:14:09.644793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.387 [2024-12-10 04:14:09.644875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.387 [2024-12-10 04:14:09.644900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.387 [2024-12-10 04:14:09.644914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.387 [2024-12-10 04:14:09.644926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.387 [2024-12-10 04:14:09.644956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.387 qpair failed and we were unable to recover it. 
00:26:15.387 [2024-12-10 04:14:09.654852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.387 [2024-12-10 04:14:09.654950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.387 [2024-12-10 04:14:09.654975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.387 [2024-12-10 04:14:09.654989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.387 [2024-12-10 04:14:09.655002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:15.387 [2024-12-10 04:14:09.655032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.387 qpair failed and we were unable to recover it. 00:26:15.387 [2024-12-10 04:14:09.664863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.387 [2024-12-10 04:14:09.664952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.387 [2024-12-10 04:14:09.664982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.387 [2024-12-10 04:14:09.664998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.387 [2024-12-10 04:14:09.665010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.387 [2024-12-10 04:14:09.665042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.387 qpair failed and we were unable to recover it. 00:26:15.387 [2024-12-10 04:14:09.674910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.387 [2024-12-10 04:14:09.675043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.387 [2024-12-10 04:14:09.675076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.387 [2024-12-10 04:14:09.675091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.387 [2024-12-10 04:14:09.675103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.387 [2024-12-10 04:14:09.675133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.387 qpair failed and we were unable to recover it. 
00:26:15.387 [2024-12-10 04:14:09.684889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.387 [2024-12-10 04:14:09.684969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.387 [2024-12-10 04:14:09.684996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.387 [2024-12-10 04:14:09.685010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.387 [2024-12-10 04:14:09.685022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.387 [2024-12-10 04:14:09.685053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.387 qpair failed and we were unable to recover it. 00:26:15.387 [2024-12-10 04:14:09.694924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.387 [2024-12-10 04:14:09.695004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.387 [2024-12-10 04:14:09.695031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.387 [2024-12-10 04:14:09.695045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.387 [2024-12-10 04:14:09.695058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.387 [2024-12-10 04:14:09.695088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.387 qpair failed and we were unable to recover it. 00:26:15.388 [2024-12-10 04:14:09.704969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.388 [2024-12-10 04:14:09.705056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.388 [2024-12-10 04:14:09.705082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.388 [2024-12-10 04:14:09.705096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.388 [2024-12-10 04:14:09.705109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.388 [2024-12-10 04:14:09.705139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.388 qpair failed and we were unable to recover it. 
00:26:15.388 [2024-12-10 04:14:09.714990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.388 [2024-12-10 04:14:09.715084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.388 [2024-12-10 04:14:09.715110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.388 [2024-12-10 04:14:09.715131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.388 [2024-12-10 04:14:09.715144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.388 [2024-12-10 04:14:09.715174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.388 qpair failed and we were unable to recover it. 00:26:15.388 [2024-12-10 04:14:09.725011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.388 [2024-12-10 04:14:09.725098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.388 [2024-12-10 04:14:09.725124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.388 [2024-12-10 04:14:09.725138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.388 [2024-12-10 04:14:09.725150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.388 [2024-12-10 04:14:09.725180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.388 qpair failed and we were unable to recover it. 00:26:15.388 [2024-12-10 04:14:09.735099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.388 [2024-12-10 04:14:09.735200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.388 [2024-12-10 04:14:09.735229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.388 [2024-12-10 04:14:09.735244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.388 [2024-12-10 04:14:09.735256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.388 [2024-12-10 04:14:09.735287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.388 qpair failed and we were unable to recover it. 
00:26:15.388 [2024-12-10 04:14:09.745108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.388 [2024-12-10 04:14:09.745201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.388 [2024-12-10 04:14:09.745227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.388 [2024-12-10 04:14:09.745242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.388 [2024-12-10 04:14:09.745254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.388 [2024-12-10 04:14:09.745285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.388 qpair failed and we were unable to recover it. 00:26:15.388 [2024-12-10 04:14:09.755095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.388 [2024-12-10 04:14:09.755182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.388 [2024-12-10 04:14:09.755209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.388 [2024-12-10 04:14:09.755223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.388 [2024-12-10 04:14:09.755236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.388 [2024-12-10 04:14:09.755271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.388 qpair failed and we were unable to recover it. 00:26:15.388 [2024-12-10 04:14:09.765107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.388 [2024-12-10 04:14:09.765194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.388 [2024-12-10 04:14:09.765219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.388 [2024-12-10 04:14:09.765233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.388 [2024-12-10 04:14:09.765246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.388 [2024-12-10 04:14:09.765277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.388 qpair failed and we were unable to recover it. 
00:26:15.649 [2024-12-10 04:14:09.775168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.649 [2024-12-10 04:14:09.775253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.649 [2024-12-10 04:14:09.775279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.649 [2024-12-10 04:14:09.775294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.649 [2024-12-10 04:14:09.775307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.649 [2024-12-10 04:14:09.775337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.649 qpair failed and we were unable to recover it. 00:26:15.649 [2024-12-10 04:14:09.785191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.649 [2024-12-10 04:14:09.785277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.649 [2024-12-10 04:14:09.785303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.649 [2024-12-10 04:14:09.785318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.649 [2024-12-10 04:14:09.785330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.649 [2024-12-10 04:14:09.785372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.649 qpair failed and we were unable to recover it. 00:26:15.649 [2024-12-10 04:14:09.795236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.649 [2024-12-10 04:14:09.795374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.649 [2024-12-10 04:14:09.795403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.649 [2024-12-10 04:14:09.795419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.649 [2024-12-10 04:14:09.795431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.649 [2024-12-10 04:14:09.795462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.649 qpair failed and we were unable to recover it. 
00:26:15.649 [2024-12-10 04:14:09.805252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.649 [2024-12-10 04:14:09.805383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.649 [2024-12-10 04:14:09.805409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.649 [2024-12-10 04:14:09.805425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.649 [2024-12-10 04:14:09.805437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.649 [2024-12-10 04:14:09.805467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.649 qpair failed and we were unable to recover it. 00:26:15.649 [2024-12-10 04:14:09.815281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.649 [2024-12-10 04:14:09.815402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.649 [2024-12-10 04:14:09.815428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.649 [2024-12-10 04:14:09.815443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.649 [2024-12-10 04:14:09.815456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.650 [2024-12-10 04:14:09.815486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.650 qpair failed and we were unable to recover it. 00:26:15.650 [2024-12-10 04:14:09.825318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.650 [2024-12-10 04:14:09.825407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.650 [2024-12-10 04:14:09.825432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.650 [2024-12-10 04:14:09.825446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.650 [2024-12-10 04:14:09.825458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.650 [2024-12-10 04:14:09.825489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.650 qpair failed and we were unable to recover it. 
00:26:15.650 [2024-12-10 04:14:09.835339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.650 [2024-12-10 04:14:09.835458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.650 [2024-12-10 04:14:09.835487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.650 [2024-12-10 04:14:09.835504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.650 [2024-12-10 04:14:09.835516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.650 [2024-12-10 04:14:09.835555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.650 qpair failed and we were unable to recover it. 00:26:15.650 [2024-12-10 04:14:09.845343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.650 [2024-12-10 04:14:09.845433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.650 [2024-12-10 04:14:09.845459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.650 [2024-12-10 04:14:09.845481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.650 [2024-12-10 04:14:09.845494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.650 [2024-12-10 04:14:09.845525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.650 qpair failed and we were unable to recover it. 00:26:15.650 [2024-12-10 04:14:09.855373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.650 [2024-12-10 04:14:09.855454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.650 [2024-12-10 04:14:09.855479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.650 [2024-12-10 04:14:09.855494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.650 [2024-12-10 04:14:09.855508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.650 [2024-12-10 04:14:09.855559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.650 qpair failed and we were unable to recover it. 
00:26:15.650 [2024-12-10 04:14:09.865478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.650 [2024-12-10 04:14:09.865575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.650 [2024-12-10 04:14:09.865602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.650 [2024-12-10 04:14:09.865616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.650 [2024-12-10 04:14:09.865629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.650 [2024-12-10 04:14:09.865660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.650 qpair failed and we were unable to recover it. 00:26:15.650 [2024-12-10 04:14:09.875463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.650 [2024-12-10 04:14:09.875579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.650 [2024-12-10 04:14:09.875607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.650 [2024-12-10 04:14:09.875622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.650 [2024-12-10 04:14:09.875634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.650 [2024-12-10 04:14:09.875664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.650 qpair failed and we were unable to recover it. 00:26:15.650 [2024-12-10 04:14:09.885443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.650 [2024-12-10 04:14:09.885521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.650 [2024-12-10 04:14:09.885555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.650 [2024-12-10 04:14:09.885571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.650 [2024-12-10 04:14:09.885583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.650 [2024-12-10 04:14:09.885619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.650 qpair failed and we were unable to recover it. 
00:26:15.650 [2024-12-10 04:14:09.895506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.650 [2024-12-10 04:14:09.895603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.650 [2024-12-10 04:14:09.895629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.650 [2024-12-10 04:14:09.895643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.650 [2024-12-10 04:14:09.895655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.650 [2024-12-10 04:14:09.895684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.650 qpair failed and we were unable to recover it. 00:26:15.650 [2024-12-10 04:14:09.905521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.650 [2024-12-10 04:14:09.905617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.650 [2024-12-10 04:14:09.905644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.650 [2024-12-10 04:14:09.905659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.650 [2024-12-10 04:14:09.905670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.650 [2024-12-10 04:14:09.905713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.650 qpair failed and we were unable to recover it. 00:26:15.650 [2024-12-10 04:14:09.915555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.650 [2024-12-10 04:14:09.915653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.650 [2024-12-10 04:14:09.915679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.650 [2024-12-10 04:14:09.915694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.650 [2024-12-10 04:14:09.915706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.650 [2024-12-10 04:14:09.915736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.650 qpair failed and we were unable to recover it. 
00:26:15.650 [2024-12-10 04:14:09.925581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.650 [2024-12-10 04:14:09.925661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.650 [2024-12-10 04:14:09.925687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.650 [2024-12-10 04:14:09.925700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.650 [2024-12-10 04:14:09.925712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.650 [2024-12-10 04:14:09.925742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.650 qpair failed and we were unable to recover it. 00:26:15.650 [2024-12-10 04:14:09.935642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.650 [2024-12-10 04:14:09.935744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.650 [2024-12-10 04:14:09.935770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.650 [2024-12-10 04:14:09.935784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.650 [2024-12-10 04:14:09.935796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.650 [2024-12-10 04:14:09.935825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.650 qpair failed and we were unable to recover it. 00:26:15.650 [2024-12-10 04:14:09.945646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.650 [2024-12-10 04:14:09.945755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.650 [2024-12-10 04:14:09.945781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.650 [2024-12-10 04:14:09.945795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.650 [2024-12-10 04:14:09.945806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.650 [2024-12-10 04:14:09.945836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.650 qpair failed and we were unable to recover it. 
00:26:15.650 [2024-12-10 04:14:09.955680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.651 [2024-12-10 04:14:09.955817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.651 [2024-12-10 04:14:09.955844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.651 [2024-12-10 04:14:09.955858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.651 [2024-12-10 04:14:09.955870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.651 [2024-12-10 04:14:09.955899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.651 qpair failed and we were unable to recover it. 00:26:15.651 [2024-12-10 04:14:09.965702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.651 [2024-12-10 04:14:09.965830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.651 [2024-12-10 04:14:09.965857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.651 [2024-12-10 04:14:09.965871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.651 [2024-12-10 04:14:09.965882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.651 [2024-12-10 04:14:09.965912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.651 qpair failed and we were unable to recover it. 00:26:15.651 [2024-12-10 04:14:09.975725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.651 [2024-12-10 04:14:09.975804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.651 [2024-12-10 04:14:09.975835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.651 [2024-12-10 04:14:09.975850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.651 [2024-12-10 04:14:09.975862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.651 [2024-12-10 04:14:09.975891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.651 qpair failed and we were unable to recover it. 
00:26:15.651 [2024-12-10 04:14:09.985740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.651 [2024-12-10 04:14:09.985828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.651 [2024-12-10 04:14:09.985854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.651 [2024-12-10 04:14:09.985868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.651 [2024-12-10 04:14:09.985880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.651 [2024-12-10 04:14:09.985910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.651 qpair failed and we were unable to recover it. 00:26:15.651 [2024-12-10 04:14:09.995785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.651 [2024-12-10 04:14:09.995887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.651 [2024-12-10 04:14:09.995913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.651 [2024-12-10 04:14:09.995927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.651 [2024-12-10 04:14:09.995939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.651 [2024-12-10 04:14:09.995969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.651 qpair failed and we were unable to recover it. 00:26:15.651 [2024-12-10 04:14:10.005911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.651 [2024-12-10 04:14:10.006076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.651 [2024-12-10 04:14:10.006111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.651 [2024-12-10 04:14:10.006132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.651 [2024-12-10 04:14:10.006150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.651 [2024-12-10 04:14:10.006198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.651 qpair failed and we were unable to recover it. 
00:26:15.651 [2024-12-10 04:14:10.015864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.651 [2024-12-10 04:14:10.015954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.651 [2024-12-10 04:14:10.015982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.651 [2024-12-10 04:14:10.015997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.651 [2024-12-10 04:14:10.016015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.651 [2024-12-10 04:14:10.016047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.651 qpair failed and we were unable to recover it. 00:26:15.651 [2024-12-10 04:14:10.025939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.651 [2024-12-10 04:14:10.026038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.651 [2024-12-10 04:14:10.026064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.651 [2024-12-10 04:14:10.026078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.651 [2024-12-10 04:14:10.026090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.651 [2024-12-10 04:14:10.026121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.651 qpair failed and we were unable to recover it. 00:26:15.913 [2024-12-10 04:14:10.035930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.913 [2024-12-10 04:14:10.036062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.913 [2024-12-10 04:14:10.036088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.913 [2024-12-10 04:14:10.036103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.913 [2024-12-10 04:14:10.036115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.913 [2024-12-10 04:14:10.036145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.913 qpair failed and we were unable to recover it. 
00:26:15.913 [2024-12-10 04:14:10.045977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.913 [2024-12-10 04:14:10.046112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.913 [2024-12-10 04:14:10.046140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.913 [2024-12-10 04:14:10.046154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.913 [2024-12-10 04:14:10.046167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.913 [2024-12-10 04:14:10.046197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.913 qpair failed and we were unable to recover it. 00:26:15.913 [2024-12-10 04:14:10.056032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.913 [2024-12-10 04:14:10.056124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.913 [2024-12-10 04:14:10.056155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.913 [2024-12-10 04:14:10.056171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.913 [2024-12-10 04:14:10.056184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.913 [2024-12-10 04:14:10.056215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.913 qpair failed and we were unable to recover it. 00:26:15.913 [2024-12-10 04:14:10.066064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.913 [2024-12-10 04:14:10.066160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.913 [2024-12-10 04:14:10.066187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.913 [2024-12-10 04:14:10.066201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.913 [2024-12-10 04:14:10.066213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.913 [2024-12-10 04:14:10.066243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.913 qpair failed and we were unable to recover it. 
00:26:15.913 [2024-12-10 04:14:10.076024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.913 [2024-12-10 04:14:10.076124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.913 [2024-12-10 04:14:10.076150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.913 [2024-12-10 04:14:10.076164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.913 [2024-12-10 04:14:10.076176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.913 [2024-12-10 04:14:10.076206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.913 qpair failed and we were unable to recover it. 00:26:15.913 [2024-12-10 04:14:10.086022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.913 [2024-12-10 04:14:10.086104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.913 [2024-12-10 04:14:10.086130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.913 [2024-12-10 04:14:10.086144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.913 [2024-12-10 04:14:10.086156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.913 [2024-12-10 04:14:10.086185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.913 qpair failed and we were unable to recover it. 00:26:15.913 [2024-12-10 04:14:10.096040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.913 [2024-12-10 04:14:10.096170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.913 [2024-12-10 04:14:10.096196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.913 [2024-12-10 04:14:10.096211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.913 [2024-12-10 04:14:10.096222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.913 [2024-12-10 04:14:10.096251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.913 qpair failed and we were unable to recover it. 
00:26:15.913 [2024-12-10 04:14:10.106078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.913 [2024-12-10 04:14:10.106164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.913 [2024-12-10 04:14:10.106195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.913 [2024-12-10 04:14:10.106211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.913 [2024-12-10 04:14:10.106222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.913 [2024-12-10 04:14:10.106264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.913 qpair failed and we were unable to recover it. 00:26:15.913 [2024-12-10 04:14:10.116102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.913 [2024-12-10 04:14:10.116207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.913 [2024-12-10 04:14:10.116238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.913 [2024-12-10 04:14:10.116261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.913 [2024-12-10 04:14:10.116282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.913 [2024-12-10 04:14:10.116330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.913 qpair failed and we were unable to recover it. 00:26:15.913 [2024-12-10 04:14:10.126138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.913 [2024-12-10 04:14:10.126230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.913 [2024-12-10 04:14:10.126259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.913 [2024-12-10 04:14:10.126283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.913 [2024-12-10 04:14:10.126298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.913 [2024-12-10 04:14:10.126329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.913 qpair failed and we were unable to recover it. 
00:26:15.913 [2024-12-10 04:14:10.136185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.914 [2024-12-10 04:14:10.136293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.914 [2024-12-10 04:14:10.136320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.914 [2024-12-10 04:14:10.136341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.914 [2024-12-10 04:14:10.136353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.914 [2024-12-10 04:14:10.136384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.914 qpair failed and we were unable to recover it. 00:26:15.914 [2024-12-10 04:14:10.146248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.914 [2024-12-10 04:14:10.146342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.914 [2024-12-10 04:14:10.146369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.914 [2024-12-10 04:14:10.146384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.914 [2024-12-10 04:14:10.146402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.914 [2024-12-10 04:14:10.146445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.914 qpair failed and we were unable to recover it. 00:26:15.914 [2024-12-10 04:14:10.156275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.914 [2024-12-10 04:14:10.156378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.914 [2024-12-10 04:14:10.156405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.914 [2024-12-10 04:14:10.156419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.914 [2024-12-10 04:14:10.156431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.914 [2024-12-10 04:14:10.156476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.914 qpair failed and we were unable to recover it. 
00:26:15.914 [2024-12-10 04:14:10.166269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.914 [2024-12-10 04:14:10.166354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.914 [2024-12-10 04:14:10.166380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.914 [2024-12-10 04:14:10.166394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.914 [2024-12-10 04:14:10.166405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.914 [2024-12-10 04:14:10.166436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.914 qpair failed and we were unable to recover it. 00:26:15.914 [2024-12-10 04:14:10.176281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.914 [2024-12-10 04:14:10.176380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.914 [2024-12-10 04:14:10.176409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.914 [2024-12-10 04:14:10.176427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.914 [2024-12-10 04:14:10.176439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.914 [2024-12-10 04:14:10.176484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.914 qpair failed and we were unable to recover it. 00:26:15.914 [2024-12-10 04:14:10.186320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.914 [2024-12-10 04:14:10.186416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.914 [2024-12-10 04:14:10.186443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.914 [2024-12-10 04:14:10.186458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.914 [2024-12-10 04:14:10.186469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.914 [2024-12-10 04:14:10.186512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.914 qpair failed and we were unable to recover it. 
00:26:15.914 [2024-12-10 04:14:10.196321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.914 [2024-12-10 04:14:10.196408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.914 [2024-12-10 04:14:10.196434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.914 [2024-12-10 04:14:10.196448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.914 [2024-12-10 04:14:10.196460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.914 [2024-12-10 04:14:10.196489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.914 qpair failed and we were unable to recover it. 00:26:15.914 [2024-12-10 04:14:10.206381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.914 [2024-12-10 04:14:10.206471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.914 [2024-12-10 04:14:10.206497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.914 [2024-12-10 04:14:10.206511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.914 [2024-12-10 04:14:10.206523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.914 [2024-12-10 04:14:10.206559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.914 qpair failed and we were unable to recover it. 00:26:15.914 [2024-12-10 04:14:10.216418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.914 [2024-12-10 04:14:10.216506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.914 [2024-12-10 04:14:10.216532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.914 [2024-12-10 04:14:10.216557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.914 [2024-12-10 04:14:10.216573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.914 [2024-12-10 04:14:10.216605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.914 qpair failed and we were unable to recover it. 
00:26:15.914 [2024-12-10 04:14:10.226433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.914 [2024-12-10 04:14:10.226529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.914 [2024-12-10 04:14:10.226567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.914 [2024-12-10 04:14:10.226583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.914 [2024-12-10 04:14:10.226600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.914 [2024-12-10 04:14:10.226642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.914 qpair failed and we were unable to recover it. 00:26:15.914 [2024-12-10 04:14:10.236452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.914 [2024-12-10 04:14:10.236563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.914 [2024-12-10 04:14:10.236593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.914 [2024-12-10 04:14:10.236608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.914 [2024-12-10 04:14:10.236620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.914 [2024-12-10 04:14:10.236649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.914 qpair failed and we were unable to recover it. 00:26:15.914 [2024-12-10 04:14:10.246516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.914 [2024-12-10 04:14:10.246666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.914 [2024-12-10 04:14:10.246692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.914 [2024-12-10 04:14:10.246706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.914 [2024-12-10 04:14:10.246718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.914 [2024-12-10 04:14:10.246747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.914 qpair failed and we were unable to recover it. 
00:26:15.914 [2024-12-10 04:14:10.256524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.914 [2024-12-10 04:14:10.256636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.914 [2024-12-10 04:14:10.256662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.914 [2024-12-10 04:14:10.256676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.914 [2024-12-10 04:14:10.256688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.914 [2024-12-10 04:14:10.256717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.914 qpair failed and we were unable to recover it. 00:26:15.914 [2024-12-10 04:14:10.266614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.914 [2024-12-10 04:14:10.266752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.914 [2024-12-10 04:14:10.266781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.914 [2024-12-10 04:14:10.266795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.915 [2024-12-10 04:14:10.266807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.915 [2024-12-10 04:14:10.266837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.915 qpair failed and we were unable to recover it. 00:26:15.915 [2024-12-10 04:14:10.276602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.915 [2024-12-10 04:14:10.276718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.915 [2024-12-10 04:14:10.276747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.915 [2024-12-10 04:14:10.276770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.915 [2024-12-10 04:14:10.276783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.915 [2024-12-10 04:14:10.276814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.915 qpair failed and we were unable to recover it. 
00:26:15.915 [2024-12-10 04:14:10.286583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.915 [2024-12-10 04:14:10.286670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.915 [2024-12-10 04:14:10.286697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.915 [2024-12-10 04:14:10.286711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.915 [2024-12-10 04:14:10.286723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:15.915 [2024-12-10 04:14:10.286765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.915 qpair failed and we were unable to recover it. 00:26:16.175 [2024-12-10 04:14:10.296622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.175 [2024-12-10 04:14:10.296714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.175 [2024-12-10 04:14:10.296740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.175 [2024-12-10 04:14:10.296754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.175 [2024-12-10 04:14:10.296766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.175 [2024-12-10 04:14:10.296796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.175 qpair failed and we were unable to recover it. 00:26:16.175 [2024-12-10 04:14:10.306651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.175 [2024-12-10 04:14:10.306747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.175 [2024-12-10 04:14:10.306773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.175 [2024-12-10 04:14:10.306788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.175 [2024-12-10 04:14:10.306799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.175 [2024-12-10 04:14:10.306829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.175 qpair failed and we were unable to recover it. 
00:26:16.175 [2024-12-10 04:14:10.316661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.175 [2024-12-10 04:14:10.316752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.175 [2024-12-10 04:14:10.316778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.175 [2024-12-10 04:14:10.316792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.175 [2024-12-10 04:14:10.316804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.175 [2024-12-10 04:14:10.316841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.175 qpair failed and we were unable to recover it. 00:26:16.175 [2024-12-10 04:14:10.326692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.175 [2024-12-10 04:14:10.326778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.175 [2024-12-10 04:14:10.326804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.175 [2024-12-10 04:14:10.326817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.175 [2024-12-10 04:14:10.326829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.175 [2024-12-10 04:14:10.326859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.175 qpair failed and we were unable to recover it. 00:26:16.175 [2024-12-10 04:14:10.336739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.175 [2024-12-10 04:14:10.336828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.175 [2024-12-10 04:14:10.336854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.175 [2024-12-10 04:14:10.336868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.175 [2024-12-10 04:14:10.336880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.175 [2024-12-10 04:14:10.336909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.175 qpair failed and we were unable to recover it. 
00:26:16.175 [2024-12-10 04:14:10.346782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.175 [2024-12-10 04:14:10.346874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.175 [2024-12-10 04:14:10.346899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.175 [2024-12-10 04:14:10.346913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.175 [2024-12-10 04:14:10.346925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.175 [2024-12-10 04:14:10.346954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.175 qpair failed and we were unable to recover it. 00:26:16.175 [2024-12-10 04:14:10.356808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.175 [2024-12-10 04:14:10.356908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.175 [2024-12-10 04:14:10.356934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.175 [2024-12-10 04:14:10.356949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.175 [2024-12-10 04:14:10.356961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.175 [2024-12-10 04:14:10.356990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.175 qpair failed and we were unable to recover it. 00:26:16.175 [2024-12-10 04:14:10.366822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.175 [2024-12-10 04:14:10.366931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.175 [2024-12-10 04:14:10.366963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.175 [2024-12-10 04:14:10.366980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.175 [2024-12-10 04:14:10.366992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.175 [2024-12-10 04:14:10.367023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.175 qpair failed and we were unable to recover it. 
00:26:16.175 [2024-12-10 04:14:10.376926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.175 [2024-12-10 04:14:10.377035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.175 [2024-12-10 04:14:10.377062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.175 [2024-12-10 04:14:10.377076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.175 [2024-12-10 04:14:10.377089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.175 [2024-12-10 04:14:10.377120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.175 qpair failed and we were unable to recover it. 00:26:16.175 [2024-12-10 04:14:10.386889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.175 [2024-12-10 04:14:10.386990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.175 [2024-12-10 04:14:10.387016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.175 [2024-12-10 04:14:10.387030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.175 [2024-12-10 04:14:10.387042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.175 [2024-12-10 04:14:10.387071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.175 qpair failed and we were unable to recover it. 00:26:16.175 [2024-12-10 04:14:10.396883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.175 [2024-12-10 04:14:10.396977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.175 [2024-12-10 04:14:10.397003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.175 [2024-12-10 04:14:10.397017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.175 [2024-12-10 04:14:10.397029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.175 [2024-12-10 04:14:10.397058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.175 qpair failed and we were unable to recover it. 
00:26:16.175 [2024-12-10 04:14:10.406936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.176 [2024-12-10 04:14:10.407016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.176 [2024-12-10 04:14:10.407042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.176 [2024-12-10 04:14:10.407062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.176 [2024-12-10 04:14:10.407074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.176 [2024-12-10 04:14:10.407104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.176 qpair failed and we were unable to recover it. 00:26:16.176 [2024-12-10 04:14:10.416949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.176 [2024-12-10 04:14:10.417036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.176 [2024-12-10 04:14:10.417061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.176 [2024-12-10 04:14:10.417075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.176 [2024-12-10 04:14:10.417087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.176 [2024-12-10 04:14:10.417116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.176 qpair failed and we were unable to recover it. 00:26:16.176 [2024-12-10 04:14:10.427005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.176 [2024-12-10 04:14:10.427098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.176 [2024-12-10 04:14:10.427124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.176 [2024-12-10 04:14:10.427138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.176 [2024-12-10 04:14:10.427150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.176 [2024-12-10 04:14:10.427179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.176 qpair failed and we were unable to recover it. 
00:26:16.176 [2024-12-10 04:14:10.437006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.176 [2024-12-10 04:14:10.437094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.176 [2024-12-10 04:14:10.437120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.176 [2024-12-10 04:14:10.437134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.176 [2024-12-10 04:14:10.437145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.176 [2024-12-10 04:14:10.437174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.176 qpair failed and we were unable to recover it. 00:26:16.176 [2024-12-10 04:14:10.447070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.176 [2024-12-10 04:14:10.447154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.176 [2024-12-10 04:14:10.447179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.176 [2024-12-10 04:14:10.447193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.176 [2024-12-10 04:14:10.447205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.176 [2024-12-10 04:14:10.447240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.176 qpair failed and we were unable to recover it. 00:26:16.176 [2024-12-10 04:14:10.457062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.176 [2024-12-10 04:14:10.457145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.176 [2024-12-10 04:14:10.457172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.176 [2024-12-10 04:14:10.457186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.176 [2024-12-10 04:14:10.457198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.176 [2024-12-10 04:14:10.457227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.176 qpair failed and we were unable to recover it. 
00:26:16.176 [2024-12-10 04:14:10.467129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.176 [2024-12-10 04:14:10.467224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.176 [2024-12-10 04:14:10.467251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.176 [2024-12-10 04:14:10.467265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.176 [2024-12-10 04:14:10.467276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.176 [2024-12-10 04:14:10.467306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.176 qpair failed and we were unable to recover it. 00:26:16.176 [2024-12-10 04:14:10.477137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.176 [2024-12-10 04:14:10.477221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.176 [2024-12-10 04:14:10.477246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.176 [2024-12-10 04:14:10.477260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.176 [2024-12-10 04:14:10.477271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.176 [2024-12-10 04:14:10.477301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.176 qpair failed and we were unable to recover it. 00:26:16.176 [2024-12-10 04:14:10.487180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.176 [2024-12-10 04:14:10.487267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.176 [2024-12-10 04:14:10.487293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.176 [2024-12-10 04:14:10.487307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.176 [2024-12-10 04:14:10.487319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.176 [2024-12-10 04:14:10.487348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.176 qpair failed and we were unable to recover it. 
00:26:16.176 [2024-12-10 04:14:10.497201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.176 [2024-12-10 04:14:10.497289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.176 [2024-12-10 04:14:10.497314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.176 [2024-12-10 04:14:10.497327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.176 [2024-12-10 04:14:10.497339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.176 [2024-12-10 04:14:10.497369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.176 qpair failed and we were unable to recover it. 00:26:16.176 [2024-12-10 04:14:10.507250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.176 [2024-12-10 04:14:10.507363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.176 [2024-12-10 04:14:10.507392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.176 [2024-12-10 04:14:10.507408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.176 [2024-12-10 04:14:10.507421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.176 [2024-12-10 04:14:10.507451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.176 qpair failed and we were unable to recover it. 00:26:16.176 [2024-12-10 04:14:10.517258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.176 [2024-12-10 04:14:10.517338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.176 [2024-12-10 04:14:10.517364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.176 [2024-12-10 04:14:10.517379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.176 [2024-12-10 04:14:10.517391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.176 [2024-12-10 04:14:10.517421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.176 qpair failed and we were unable to recover it. 
00:26:16.176 [2024-12-10 04:14:10.527275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.176 [2024-12-10 04:14:10.527359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.176 [2024-12-10 04:14:10.527385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.176 [2024-12-10 04:14:10.527399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.176 [2024-12-10 04:14:10.527410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.176 [2024-12-10 04:14:10.527440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.176 qpair failed and we were unable to recover it. 00:26:16.176 [2024-12-10 04:14:10.537315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.176 [2024-12-10 04:14:10.537404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.176 [2024-12-10 04:14:10.537434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.177 [2024-12-10 04:14:10.537450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.177 [2024-12-10 04:14:10.537461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.177 [2024-12-10 04:14:10.537491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.177 qpair failed and we were unable to recover it. 00:26:16.177 [2024-12-10 04:14:10.547331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.177 [2024-12-10 04:14:10.547429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.177 [2024-12-10 04:14:10.547455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.177 [2024-12-10 04:14:10.547469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.177 [2024-12-10 04:14:10.547481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.177 [2024-12-10 04:14:10.547511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.177 qpair failed and we were unable to recover it. 
00:26:16.436 [2024-12-10 04:14:10.557366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.436 [2024-12-10 04:14:10.557454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.436 [2024-12-10 04:14:10.557480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.436 [2024-12-10 04:14:10.557494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.436 [2024-12-10 04:14:10.557506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.436 [2024-12-10 04:14:10.557536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.436 qpair failed and we were unable to recover it. 00:26:16.436 [2024-12-10 04:14:10.567374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.436 [2024-12-10 04:14:10.567456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.436 [2024-12-10 04:14:10.567481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.436 [2024-12-10 04:14:10.567495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.436 [2024-12-10 04:14:10.567507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.436 [2024-12-10 04:14:10.567537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.436 qpair failed and we were unable to recover it. 00:26:16.436 [2024-12-10 04:14:10.577407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.436 [2024-12-10 04:14:10.577496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.436 [2024-12-10 04:14:10.577522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.436 [2024-12-10 04:14:10.577536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.436 [2024-12-10 04:14:10.577564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.436 [2024-12-10 04:14:10.577596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.436 qpair failed and we were unable to recover it. 
00:26:16.436 [2024-12-10 04:14:10.587454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.436 [2024-12-10 04:14:10.587563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.436 [2024-12-10 04:14:10.587589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.436 [2024-12-10 04:14:10.587603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.436 [2024-12-10 04:14:10.587615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.436 [2024-12-10 04:14:10.587644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.436 qpair failed and we were unable to recover it. 00:26:16.436 [2024-12-10 04:14:10.597488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.436 [2024-12-10 04:14:10.597583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.436 [2024-12-10 04:14:10.597610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.436 [2024-12-10 04:14:10.597623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.436 [2024-12-10 04:14:10.597635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.436 [2024-12-10 04:14:10.597664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.436 qpair failed and we were unable to recover it. 00:26:16.436 [2024-12-10 04:14:10.607473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.436 [2024-12-10 04:14:10.607561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.436 [2024-12-10 04:14:10.607587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.436 [2024-12-10 04:14:10.607602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.436 [2024-12-10 04:14:10.607613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.436 [2024-12-10 04:14:10.607643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.436 qpair failed and we were unable to recover it. 
00:26:16.436 [2024-12-10 04:14:10.617529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.437 [2024-12-10 04:14:10.617675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.437 [2024-12-10 04:14:10.617703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.437 [2024-12-10 04:14:10.617718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.437 [2024-12-10 04:14:10.617729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.437 [2024-12-10 04:14:10.617761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.437 qpair failed and we were unable to recover it. 00:26:16.437 [2024-12-10 04:14:10.627585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.437 [2024-12-10 04:14:10.627681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.437 [2024-12-10 04:14:10.627709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.437 [2024-12-10 04:14:10.627724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.437 [2024-12-10 04:14:10.627735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.437 [2024-12-10 04:14:10.627766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.437 qpair failed and we were unable to recover it. 00:26:16.437 [2024-12-10 04:14:10.637621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.437 [2024-12-10 04:14:10.637720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.437 [2024-12-10 04:14:10.637747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.437 [2024-12-10 04:14:10.637762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.437 [2024-12-10 04:14:10.637773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.437 [2024-12-10 04:14:10.637803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.437 qpair failed and we were unable to recover it. 
00:26:16.437 [2024-12-10 04:14:10.647611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.437 [2024-12-10 04:14:10.647698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.437 [2024-12-10 04:14:10.647724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.437 [2024-12-10 04:14:10.647738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.437 [2024-12-10 04:14:10.647750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.437 [2024-12-10 04:14:10.647780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.437 qpair failed and we were unable to recover it. 00:26:16.437 [2024-12-10 04:14:10.657673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.437 [2024-12-10 04:14:10.657763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.437 [2024-12-10 04:14:10.657789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.437 [2024-12-10 04:14:10.657803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.437 [2024-12-10 04:14:10.657815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.437 [2024-12-10 04:14:10.657844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.437 qpair failed and we were unable to recover it. 00:26:16.437 [2024-12-10 04:14:10.667711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.437 [2024-12-10 04:14:10.667807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.437 [2024-12-10 04:14:10.667838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.437 [2024-12-10 04:14:10.667853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.437 [2024-12-10 04:14:10.667865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.437 [2024-12-10 04:14:10.667895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.437 qpair failed and we were unable to recover it. 
00:26:16.437 [2024-12-10 04:14:10.677704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.437 [2024-12-10 04:14:10.677794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.437 [2024-12-10 04:14:10.677820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.437 [2024-12-10 04:14:10.677834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.437 [2024-12-10 04:14:10.677846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.437 [2024-12-10 04:14:10.677875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.437 qpair failed and we were unable to recover it. 00:26:16.437 [2024-12-10 04:14:10.687723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.437 [2024-12-10 04:14:10.687805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.437 [2024-12-10 04:14:10.687831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.437 [2024-12-10 04:14:10.687845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.437 [2024-12-10 04:14:10.687856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.437 [2024-12-10 04:14:10.687898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.437 qpair failed and we were unable to recover it. 00:26:16.437 [2024-12-10 04:14:10.697806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.437 [2024-12-10 04:14:10.697894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.437 [2024-12-10 04:14:10.697920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.437 [2024-12-10 04:14:10.697934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.437 [2024-12-10 04:14:10.697946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.437 [2024-12-10 04:14:10.697977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.437 qpair failed and we were unable to recover it. 
00:26:16.437 [2024-12-10 04:14:10.707828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.437 [2024-12-10 04:14:10.707951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.437 [2024-12-10 04:14:10.707977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.437 [2024-12-10 04:14:10.707992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.437 [2024-12-10 04:14:10.708010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.437 [2024-12-10 04:14:10.708040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.437 qpair failed and we were unable to recover it. 00:26:16.437 [2024-12-10 04:14:10.717827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.437 [2024-12-10 04:14:10.717951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.437 [2024-12-10 04:14:10.717977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.437 [2024-12-10 04:14:10.717992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.437 [2024-12-10 04:14:10.718003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.437 [2024-12-10 04:14:10.718032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.437 qpair failed and we were unable to recover it. 00:26:16.437 [2024-12-10 04:14:10.727842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.437 [2024-12-10 04:14:10.727930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.437 [2024-12-10 04:14:10.727956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.437 [2024-12-10 04:14:10.727971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.437 [2024-12-10 04:14:10.727983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.437 [2024-12-10 04:14:10.728025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.437 qpair failed and we were unable to recover it. 
00:26:16.437 [2024-12-10 04:14:10.737872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.437 [2024-12-10 04:14:10.737954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.437 [2024-12-10 04:14:10.737980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.437 [2024-12-10 04:14:10.737994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.437 [2024-12-10 04:14:10.738007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.437 [2024-12-10 04:14:10.738037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.437 qpair failed and we were unable to recover it. 00:26:16.437 [2024-12-10 04:14:10.747911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.437 [2024-12-10 04:14:10.748004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.437 [2024-12-10 04:14:10.748030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.437 [2024-12-10 04:14:10.748045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.438 [2024-12-10 04:14:10.748056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.438 [2024-12-10 04:14:10.748098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.438 qpair failed and we were unable to recover it. 00:26:16.438 [2024-12-10 04:14:10.757908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.438 [2024-12-10 04:14:10.757991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.438 [2024-12-10 04:14:10.758017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.438 [2024-12-10 04:14:10.758031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.438 [2024-12-10 04:14:10.758043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.438 [2024-12-10 04:14:10.758072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.438 qpair failed and we were unable to recover it. 
00:26:16.438 [2024-12-10 04:14:10.767950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.438 [2024-12-10 04:14:10.768037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.438 [2024-12-10 04:14:10.768063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.438 [2024-12-10 04:14:10.768078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.438 [2024-12-10 04:14:10.768089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.438 [2024-12-10 04:14:10.768131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.438 qpair failed and we were unable to recover it. 00:26:16.438 [2024-12-10 04:14:10.778053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.438 [2024-12-10 04:14:10.778150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.438 [2024-12-10 04:14:10.778176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.438 [2024-12-10 04:14:10.778190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.438 [2024-12-10 04:14:10.778202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.438 [2024-12-10 04:14:10.778232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.438 qpair failed and we were unable to recover it. 00:26:16.438 [2024-12-10 04:14:10.788065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.438 [2024-12-10 04:14:10.788164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.438 [2024-12-10 04:14:10.788191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.438 [2024-12-10 04:14:10.788210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.438 [2024-12-10 04:14:10.788223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.438 [2024-12-10 04:14:10.788253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.438 qpair failed and we were unable to recover it. 
00:26:16.438 [2024-12-10 04:14:10.798052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.438 [2024-12-10 04:14:10.798158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.438 [2024-12-10 04:14:10.798187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.438 [2024-12-10 04:14:10.798204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.438 [2024-12-10 04:14:10.798216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.438 [2024-12-10 04:14:10.798247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.438 qpair failed and we were unable to recover it. 00:26:16.438 [2024-12-10 04:14:10.808069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.438 [2024-12-10 04:14:10.808154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.438 [2024-12-10 04:14:10.808181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.438 [2024-12-10 04:14:10.808195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.438 [2024-12-10 04:14:10.808207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.438 [2024-12-10 04:14:10.808237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.438 qpair failed and we were unable to recover it. 00:26:16.697 [2024-12-10 04:14:10.818109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.697 [2024-12-10 04:14:10.818201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.697 [2024-12-10 04:14:10.818227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.697 [2024-12-10 04:14:10.818242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.697 [2024-12-10 04:14:10.818253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.697 [2024-12-10 04:14:10.818283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.697 qpair failed and we were unable to recover it. 
00:26:16.697 [2024-12-10 04:14:10.828141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.697 [2024-12-10 04:14:10.828265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.697 [2024-12-10 04:14:10.828291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.697 [2024-12-10 04:14:10.828305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.697 [2024-12-10 04:14:10.828316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.697 [2024-12-10 04:14:10.828346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.697 qpair failed and we were unable to recover it. 00:26:16.697 [2024-12-10 04:14:10.838212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.697 [2024-12-10 04:14:10.838316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.697 [2024-12-10 04:14:10.838345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.697 [2024-12-10 04:14:10.838368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.698 [2024-12-10 04:14:10.838381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.698 [2024-12-10 04:14:10.838412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.698 qpair failed and we were unable to recover it. 00:26:16.698 [2024-12-10 04:14:10.848192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.698 [2024-12-10 04:14:10.848279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.698 [2024-12-10 04:14:10.848307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.698 [2024-12-10 04:14:10.848324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.698 [2024-12-10 04:14:10.848336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.698 [2024-12-10 04:14:10.848366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.698 qpair failed and we were unable to recover it. 
00:26:16.698 [2024-12-10 04:14:10.858218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.698 [2024-12-10 04:14:10.858300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.698 [2024-12-10 04:14:10.858326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.698 [2024-12-10 04:14:10.858340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.698 [2024-12-10 04:14:10.858352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.698 [2024-12-10 04:14:10.858382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.698 qpair failed and we were unable to recover it. 00:26:16.698 [2024-12-10 04:14:10.868261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.698 [2024-12-10 04:14:10.868376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.698 [2024-12-10 04:14:10.868405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.698 [2024-12-10 04:14:10.868420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.698 [2024-12-10 04:14:10.868432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.698 [2024-12-10 04:14:10.868463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.698 qpair failed and we were unable to recover it. 00:26:16.698 [2024-12-10 04:14:10.878267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.698 [2024-12-10 04:14:10.878354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.698 [2024-12-10 04:14:10.878382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.698 [2024-12-10 04:14:10.878396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.698 [2024-12-10 04:14:10.878407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.698 [2024-12-10 04:14:10.878443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.698 qpair failed and we were unable to recover it. 
00:26:16.698 [2024-12-10 04:14:10.888290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.698 [2024-12-10 04:14:10.888378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.698 [2024-12-10 04:14:10.888403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.698 [2024-12-10 04:14:10.888417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.698 [2024-12-10 04:14:10.888429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.698 [2024-12-10 04:14:10.888459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.698 qpair failed and we were unable to recover it. 00:26:16.698 [2024-12-10 04:14:10.898326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.698 [2024-12-10 04:14:10.898406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.698 [2024-12-10 04:14:10.898432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.698 [2024-12-10 04:14:10.898446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.698 [2024-12-10 04:14:10.898458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.698 [2024-12-10 04:14:10.898500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.698 qpair failed and we were unable to recover it. 00:26:16.698 [2024-12-10 04:14:10.908381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.698 [2024-12-10 04:14:10.908517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.698 [2024-12-10 04:14:10.908543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.698 [2024-12-10 04:14:10.908567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.698 [2024-12-10 04:14:10.908579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.698 [2024-12-10 04:14:10.908609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.698 qpair failed and we were unable to recover it. 
00:26:16.698 [2024-12-10 04:14:10.918365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.698 [2024-12-10 04:14:10.918462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.698 [2024-12-10 04:14:10.918488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.698 [2024-12-10 04:14:10.918503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.698 [2024-12-10 04:14:10.918514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.698 [2024-12-10 04:14:10.918552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.698 qpair failed and we were unable to recover it. 00:26:16.698 [2024-12-10 04:14:10.928418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.698 [2024-12-10 04:14:10.928505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.698 [2024-12-10 04:14:10.928530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.698 [2024-12-10 04:14:10.928551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.698 [2024-12-10 04:14:10.928565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.698 [2024-12-10 04:14:10.928594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.698 qpair failed and we were unable to recover it. 00:26:16.698 [2024-12-10 04:14:10.938459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.698 [2024-12-10 04:14:10.938556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.698 [2024-12-10 04:14:10.938582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.698 [2024-12-10 04:14:10.938596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.698 [2024-12-10 04:14:10.938608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.698 [2024-12-10 04:14:10.938637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.698 qpair failed and we were unable to recover it. 
00:26:16.698 [2024-12-10 04:14:10.948466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.698 [2024-12-10 04:14:10.948585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.698 [2024-12-10 04:14:10.948611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.698 [2024-12-10 04:14:10.948625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.698 [2024-12-10 04:14:10.948636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.698 [2024-12-10 04:14:10.948666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.698 qpair failed and we were unable to recover it. 00:26:16.698 [2024-12-10 04:14:10.958566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.698 [2024-12-10 04:14:10.958653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.698 [2024-12-10 04:14:10.958680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.698 [2024-12-10 04:14:10.958694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.698 [2024-12-10 04:14:10.958705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.698 [2024-12-10 04:14:10.958735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.698 qpair failed and we were unable to recover it. 00:26:16.698 [2024-12-10 04:14:10.968519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.698 [2024-12-10 04:14:10.968619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.698 [2024-12-10 04:14:10.968646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.698 [2024-12-10 04:14:10.968671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.698 [2024-12-10 04:14:10.968684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.698 [2024-12-10 04:14:10.968713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.698 qpair failed and we were unable to recover it. 
00:26:16.699 [2024-12-10 04:14:10.978579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.699 [2024-12-10 04:14:10.978666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.699 [2024-12-10 04:14:10.978691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.699 [2024-12-10 04:14:10.978705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.699 [2024-12-10 04:14:10.978716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.699 [2024-12-10 04:14:10.978746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.699 qpair failed and we were unable to recover it. 00:26:16.699 [2024-12-10 04:14:10.988590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.699 [2024-12-10 04:14:10.988702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.699 [2024-12-10 04:14:10.988731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.699 [2024-12-10 04:14:10.988745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.699 [2024-12-10 04:14:10.988756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.699 [2024-12-10 04:14:10.988786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.699 qpair failed and we were unable to recover it. 00:26:16.699 [2024-12-10 04:14:10.998601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.699 [2024-12-10 04:14:10.998690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.699 [2024-12-10 04:14:10.998716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.699 [2024-12-10 04:14:10.998730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.699 [2024-12-10 04:14:10.998742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.699 [2024-12-10 04:14:10.998772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.699 qpair failed and we were unable to recover it. 
00:26:16.699 [2024-12-10 04:14:11.008644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.699 [2024-12-10 04:14:11.008731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.699 [2024-12-10 04:14:11.008757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.699 [2024-12-10 04:14:11.008770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.699 [2024-12-10 04:14:11.008782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.699 [2024-12-10 04:14:11.008820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.699 qpair failed and we were unable to recover it. 00:26:16.699 [2024-12-10 04:14:11.018667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.699 [2024-12-10 04:14:11.018756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.699 [2024-12-10 04:14:11.018782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.699 [2024-12-10 04:14:11.018796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.699 [2024-12-10 04:14:11.018808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.699 [2024-12-10 04:14:11.018837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.699 qpair failed and we were unable to recover it. 00:26:16.699 [2024-12-10 04:14:11.028708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.699 [2024-12-10 04:14:11.028801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.699 [2024-12-10 04:14:11.028826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.699 [2024-12-10 04:14:11.028840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.699 [2024-12-10 04:14:11.028852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.699 [2024-12-10 04:14:11.028882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.699 qpair failed and we were unable to recover it. 
00:26:16.699 [2024-12-10 04:14:11.038739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.699 [2024-12-10 04:14:11.038822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.699 [2024-12-10 04:14:11.038847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.699 [2024-12-10 04:14:11.038860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.699 [2024-12-10 04:14:11.038873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.699 [2024-12-10 04:14:11.038902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.699 qpair failed and we were unable to recover it. 00:26:16.699 [2024-12-10 04:14:11.048786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.699 [2024-12-10 04:14:11.048905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.699 [2024-12-10 04:14:11.048931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.699 [2024-12-10 04:14:11.048944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.699 [2024-12-10 04:14:11.048956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.699 [2024-12-10 04:14:11.048985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.699 qpair failed and we were unable to recover it. 00:26:16.699 [2024-12-10 04:14:11.058764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.699 [2024-12-10 04:14:11.058855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.699 [2024-12-10 04:14:11.058881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.699 [2024-12-10 04:14:11.058895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.699 [2024-12-10 04:14:11.058906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.699 [2024-12-10 04:14:11.058936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.699 qpair failed and we were unable to recover it. 
00:26:16.699 [2024-12-10 04:14:11.068815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.699 [2024-12-10 04:14:11.068908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.699 [2024-12-10 04:14:11.068933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.699 [2024-12-10 04:14:11.068947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.699 [2024-12-10 04:14:11.068959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.699 [2024-12-10 04:14:11.068989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.699 qpair failed and we were unable to recover it. 00:26:16.699 [2024-12-10 04:14:11.078887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.699 [2024-12-10 04:14:11.078976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.699 [2024-12-10 04:14:11.079001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.959 [2024-12-10 04:14:11.079016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.959 [2024-12-10 04:14:11.079030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.959 [2024-12-10 04:14:11.079060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.959 qpair failed and we were unable to recover it. 00:26:16.959 [2024-12-10 04:14:11.088848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.959 [2024-12-10 04:14:11.088945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.959 [2024-12-10 04:14:11.088971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.959 [2024-12-10 04:14:11.088986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.959 [2024-12-10 04:14:11.088997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.959 [2024-12-10 04:14:11.089026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.959 qpair failed and we were unable to recover it. 
00:26:16.959 [2024-12-10 04:14:11.098905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.959 [2024-12-10 04:14:11.098997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.959 [2024-12-10 04:14:11.099029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.959 [2024-12-10 04:14:11.099044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.959 [2024-12-10 04:14:11.099055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.959 [2024-12-10 04:14:11.099084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.959 qpair failed and we were unable to recover it. 00:26:16.959 [2024-12-10 04:14:11.108953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.959 [2024-12-10 04:14:11.109073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.959 [2024-12-10 04:14:11.109098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.959 [2024-12-10 04:14:11.109112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.959 [2024-12-10 04:14:11.109123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.959 [2024-12-10 04:14:11.109152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.959 qpair failed and we were unable to recover it. 00:26:16.959 [2024-12-10 04:14:11.118968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.959 [2024-12-10 04:14:11.119065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.959 [2024-12-10 04:14:11.119098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.959 [2024-12-10 04:14:11.119117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.959 [2024-12-10 04:14:11.119129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.959 [2024-12-10 04:14:11.119160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.959 qpair failed and we were unable to recover it. 
00:26:16.959 [2024-12-10 04:14:11.128963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.959 [2024-12-10 04:14:11.129052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.959 [2024-12-10 04:14:11.129079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.959 [2024-12-10 04:14:11.129093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.959 [2024-12-10 04:14:11.129105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.959 [2024-12-10 04:14:11.129135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.959 qpair failed and we were unable to recover it. 00:26:16.959 [2024-12-10 04:14:11.138988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.959 [2024-12-10 04:14:11.139072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.959 [2024-12-10 04:14:11.139098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.959 [2024-12-10 04:14:11.139113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.959 [2024-12-10 04:14:11.139130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.959 [2024-12-10 04:14:11.139161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.959 qpair failed and we were unable to recover it. 00:26:16.959 [2024-12-10 04:14:11.149125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.959 [2024-12-10 04:14:11.149269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.959 [2024-12-10 04:14:11.149296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.959 [2024-12-10 04:14:11.149310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.959 [2024-12-10 04:14:11.149322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.959 [2024-12-10 04:14:11.149351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.959 qpair failed and we were unable to recover it. 
00:26:16.959 [2024-12-10 04:14:11.159075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.959 [2024-12-10 04:14:11.159160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.959 [2024-12-10 04:14:11.159185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.959 [2024-12-10 04:14:11.159199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.959 [2024-12-10 04:14:11.159210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.959 [2024-12-10 04:14:11.159240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.959 qpair failed and we were unable to recover it. 00:26:16.959 [2024-12-10 04:14:11.169110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.959 [2024-12-10 04:14:11.169229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.959 [2024-12-10 04:14:11.169255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.959 [2024-12-10 04:14:11.169269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.959 [2024-12-10 04:14:11.169280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.960 [2024-12-10 04:14:11.169309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.960 qpair failed and we were unable to recover it. 00:26:16.960 [2024-12-10 04:14:11.179142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.960 [2024-12-10 04:14:11.179228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.960 [2024-12-10 04:14:11.179254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.960 [2024-12-10 04:14:11.179268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.960 [2024-12-10 04:14:11.179280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.960 [2024-12-10 04:14:11.179309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.960 qpair failed and we were unable to recover it. 
00:26:16.960 [2024-12-10 04:14:11.189152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.960 [2024-12-10 04:14:11.189252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.960 [2024-12-10 04:14:11.189278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.960 [2024-12-10 04:14:11.189292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.960 [2024-12-10 04:14:11.189303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.960 [2024-12-10 04:14:11.189332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.960 qpair failed and we were unable to recover it. 00:26:16.960 [2024-12-10 04:14:11.199182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.960 [2024-12-10 04:14:11.199275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.960 [2024-12-10 04:14:11.199300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.960 [2024-12-10 04:14:11.199314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.960 [2024-12-10 04:14:11.199325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.960 [2024-12-10 04:14:11.199355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.960 qpair failed and we were unable to recover it. 00:26:16.960 [2024-12-10 04:14:11.209209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.960 [2024-12-10 04:14:11.209291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.960 [2024-12-10 04:14:11.209316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.960 [2024-12-10 04:14:11.209330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.960 [2024-12-10 04:14:11.209342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.960 [2024-12-10 04:14:11.209371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.960 qpair failed and we were unable to recover it. 
00:26:16.960 [2024-12-10 04:14:11.219284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.960 [2024-12-10 04:14:11.219380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.960 [2024-12-10 04:14:11.219404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.960 [2024-12-10 04:14:11.219418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.960 [2024-12-10 04:14:11.219430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.960 [2024-12-10 04:14:11.219459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.960 qpair failed and we were unable to recover it. 00:26:16.960 [2024-12-10 04:14:11.229339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.960 [2024-12-10 04:14:11.229486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.960 [2024-12-10 04:14:11.229518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.960 [2024-12-10 04:14:11.229533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.960 [2024-12-10 04:14:11.229552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.960 [2024-12-10 04:14:11.229584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.960 qpair failed and we were unable to recover it. 00:26:16.960 [2024-12-10 04:14:11.239350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.960 [2024-12-10 04:14:11.239434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.960 [2024-12-10 04:14:11.239460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.960 [2024-12-10 04:14:11.239474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.960 [2024-12-10 04:14:11.239486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.960 [2024-12-10 04:14:11.239527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.960 qpair failed and we were unable to recover it. 
00:26:16.960 [2024-12-10 04:14:11.249375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.960 [2024-12-10 04:14:11.249469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.960 [2024-12-10 04:14:11.249499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.960 [2024-12-10 04:14:11.249514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.960 [2024-12-10 04:14:11.249525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.960 [2024-12-10 04:14:11.249574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.960 qpair failed and we were unable to recover it. 00:26:16.960 [2024-12-10 04:14:11.259385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.960 [2024-12-10 04:14:11.259470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.960 [2024-12-10 04:14:11.259496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.960 [2024-12-10 04:14:11.259511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.960 [2024-12-10 04:14:11.259523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.960 [2024-12-10 04:14:11.259559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.960 qpair failed and we were unable to recover it. 00:26:16.960 [2024-12-10 04:14:11.269372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.960 [2024-12-10 04:14:11.269479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.960 [2024-12-10 04:14:11.269505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.960 [2024-12-10 04:14:11.269520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.960 [2024-12-10 04:14:11.269557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.960 [2024-12-10 04:14:11.269589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.960 qpair failed and we were unable to recover it. 
00:26:16.960 [2024-12-10 04:14:11.279384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.960 [2024-12-10 04:14:11.279470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.960 [2024-12-10 04:14:11.279495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.960 [2024-12-10 04:14:11.279509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.960 [2024-12-10 04:14:11.279521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.960 [2024-12-10 04:14:11.279557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.960 qpair failed and we were unable to recover it. 00:26:16.960 [2024-12-10 04:14:11.289425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.960 [2024-12-10 04:14:11.289511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.960 [2024-12-10 04:14:11.289537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.960 [2024-12-10 04:14:11.289559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.960 [2024-12-10 04:14:11.289572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.960 [2024-12-10 04:14:11.289601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.960 qpair failed and we were unable to recover it. 00:26:16.960 [2024-12-10 04:14:11.299452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.960 [2024-12-10 04:14:11.299538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.960 [2024-12-10 04:14:11.299571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.960 [2024-12-10 04:14:11.299585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.960 [2024-12-10 04:14:11.299597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.960 [2024-12-10 04:14:11.299627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.960 qpair failed and we were unable to recover it. 
00:26:16.960 [2024-12-10 04:14:11.309500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.961 [2024-12-10 04:14:11.309619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.961 [2024-12-10 04:14:11.309645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.961 [2024-12-10 04:14:11.309659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.961 [2024-12-10 04:14:11.309671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.961 [2024-12-10 04:14:11.309701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.961 qpair failed and we were unable to recover it. 00:26:16.961 [2024-12-10 04:14:11.319520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.961 [2024-12-10 04:14:11.319637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.961 [2024-12-10 04:14:11.319666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.961 [2024-12-10 04:14:11.319683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.961 [2024-12-10 04:14:11.319695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.961 [2024-12-10 04:14:11.319738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.961 qpair failed and we were unable to recover it. 00:26:16.961 [2024-12-10 04:14:11.329527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.961 [2024-12-10 04:14:11.329673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.961 [2024-12-10 04:14:11.329700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.961 [2024-12-10 04:14:11.329714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.961 [2024-12-10 04:14:11.329726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.961 [2024-12-10 04:14:11.329757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.961 qpair failed and we were unable to recover it. 
00:26:16.961 [2024-12-10 04:14:11.339628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.961 [2024-12-10 04:14:11.339713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.961 [2024-12-10 04:14:11.339738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.961 [2024-12-10 04:14:11.339752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.961 [2024-12-10 04:14:11.339763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:16.961 [2024-12-10 04:14:11.339793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:16.961 qpair failed and we were unable to recover it. 00:26:17.222 [2024-12-10 04:14:11.349614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.222 [2024-12-10 04:14:11.349715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.222 [2024-12-10 04:14:11.349741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.222 [2024-12-10 04:14:11.349755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.222 [2024-12-10 04:14:11.349767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.222 [2024-12-10 04:14:11.349797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.222 qpair failed and we were unable to recover it. 00:26:17.222 [2024-12-10 04:14:11.359640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.222 [2024-12-10 04:14:11.359760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.222 [2024-12-10 04:14:11.359786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.222 [2024-12-10 04:14:11.359800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.222 [2024-12-10 04:14:11.359811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.222 [2024-12-10 04:14:11.359841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.222 qpair failed and we were unable to recover it. 
00:26:17.222 [2024-12-10 04:14:11.369653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.222 [2024-12-10 04:14:11.369745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.222 [2024-12-10 04:14:11.369780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.222 [2024-12-10 04:14:11.369799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.222 [2024-12-10 04:14:11.369811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.222 [2024-12-10 04:14:11.369843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.222 qpair failed and we were unable to recover it. 00:26:17.222 [2024-12-10 04:14:11.379678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.222 [2024-12-10 04:14:11.379768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.222 [2024-12-10 04:14:11.379797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.222 [2024-12-10 04:14:11.379814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.222 [2024-12-10 04:14:11.379827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.222 [2024-12-10 04:14:11.379857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.222 qpair failed and we were unable to recover it. 00:26:17.222 [2024-12-10 04:14:11.389750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.222 [2024-12-10 04:14:11.389877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.222 [2024-12-10 04:14:11.389904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.222 [2024-12-10 04:14:11.389918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.222 [2024-12-10 04:14:11.389930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.222 [2024-12-10 04:14:11.389959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.222 qpair failed and we were unable to recover it. 
00:26:17.222 [2024-12-10 04:14:11.399777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.222 [2024-12-10 04:14:11.399868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.222 [2024-12-10 04:14:11.399894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.222 [2024-12-10 04:14:11.399914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.222 [2024-12-10 04:14:11.399926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.222 [2024-12-10 04:14:11.399956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.222 qpair failed and we were unable to recover it. 00:26:17.222 [2024-12-10 04:14:11.409759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.222 [2024-12-10 04:14:11.409856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.222 [2024-12-10 04:14:11.409882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.223 [2024-12-10 04:14:11.409896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.223 [2024-12-10 04:14:11.409908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.223 [2024-12-10 04:14:11.409937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.223 qpair failed and we were unable to recover it. 00:26:17.223 [2024-12-10 04:14:11.419788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.223 [2024-12-10 04:14:11.419872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.223 [2024-12-10 04:14:11.419898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.223 [2024-12-10 04:14:11.419912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.223 [2024-12-10 04:14:11.419924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.223 [2024-12-10 04:14:11.419954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.223 qpair failed and we were unable to recover it. 
00:26:17.223 [2024-12-10 04:14:11.429848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.223 [2024-12-10 04:14:11.429944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.223 [2024-12-10 04:14:11.429970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.223 [2024-12-10 04:14:11.429984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.223 [2024-12-10 04:14:11.429995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.223 [2024-12-10 04:14:11.430025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.223 qpair failed and we were unable to recover it. 00:26:17.223 [2024-12-10 04:14:11.439847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.223 [2024-12-10 04:14:11.439930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.223 [2024-12-10 04:14:11.439956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.223 [2024-12-10 04:14:11.439971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.223 [2024-12-10 04:14:11.439985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.223 [2024-12-10 04:14:11.440021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.223 qpair failed and we were unable to recover it. 00:26:17.223 [2024-12-10 04:14:11.449854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.223 [2024-12-10 04:14:11.449956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.223 [2024-12-10 04:14:11.449982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.223 [2024-12-10 04:14:11.449996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.223 [2024-12-10 04:14:11.450008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.223 [2024-12-10 04:14:11.450037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.223 qpair failed and we were unable to recover it. 
00:26:17.223 [2024-12-10 04:14:11.459949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.223 [2024-12-10 04:14:11.460049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.223 [2024-12-10 04:14:11.460075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.223 [2024-12-10 04:14:11.460089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.223 [2024-12-10 04:14:11.460101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.223 [2024-12-10 04:14:11.460130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.223 qpair failed and we were unable to recover it. 00:26:17.223 [2024-12-10 04:14:11.469943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.223 [2024-12-10 04:14:11.470042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.223 [2024-12-10 04:14:11.470068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.223 [2024-12-10 04:14:11.470081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.223 [2024-12-10 04:14:11.470093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.223 [2024-12-10 04:14:11.470122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.223 qpair failed and we were unable to recover it. 00:26:17.223 [2024-12-10 04:14:11.479993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.223 [2024-12-10 04:14:11.480075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.223 [2024-12-10 04:14:11.480100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.223 [2024-12-10 04:14:11.480114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.223 [2024-12-10 04:14:11.480126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.223 [2024-12-10 04:14:11.480156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.223 qpair failed and we were unable to recover it. 
00:26:17.223 [2024-12-10 04:14:11.490008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.223 [2024-12-10 04:14:11.490142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.223 [2024-12-10 04:14:11.490167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.223 [2024-12-10 04:14:11.490181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.223 [2024-12-10 04:14:11.490193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.223 [2024-12-10 04:14:11.490222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.223 qpair failed and we were unable to recover it. 00:26:17.223 [2024-12-10 04:14:11.500039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.223 [2024-12-10 04:14:11.500130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.223 [2024-12-10 04:14:11.500155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.223 [2024-12-10 04:14:11.500169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.223 [2024-12-10 04:14:11.500180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.223 [2024-12-10 04:14:11.500210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.223 qpair failed and we were unable to recover it. 00:26:17.223 [2024-12-10 04:14:11.510071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.223 [2024-12-10 04:14:11.510208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.223 [2024-12-10 04:14:11.510234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.223 [2024-12-10 04:14:11.510248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.223 [2024-12-10 04:14:11.510259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.223 [2024-12-10 04:14:11.510289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.223 qpair failed and we were unable to recover it. 
00:26:17.223 [2024-12-10 04:14:11.520083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.223 [2024-12-10 04:14:11.520175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.223 [2024-12-10 04:14:11.520201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.223 [2024-12-10 04:14:11.520215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.223 [2024-12-10 04:14:11.520227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.223 [2024-12-10 04:14:11.520256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.223 qpair failed and we were unable to recover it. 00:26:17.223 [2024-12-10 04:14:11.530096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.223 [2024-12-10 04:14:11.530220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.223 [2024-12-10 04:14:11.530251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.223 [2024-12-10 04:14:11.530266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.223 [2024-12-10 04:14:11.530277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.223 [2024-12-10 04:14:11.530306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.223 qpair failed and we were unable to recover it. 00:26:17.223 [2024-12-10 04:14:11.540162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.223 [2024-12-10 04:14:11.540252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.223 [2024-12-10 04:14:11.540281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.223 [2024-12-10 04:14:11.540298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.223 [2024-12-10 04:14:11.540310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.223 [2024-12-10 04:14:11.540340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.223 qpair failed and we were unable to recover it. 
00:26:17.224 [2024-12-10 04:14:11.550169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.224 [2024-12-10 04:14:11.550264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.224 [2024-12-10 04:14:11.550290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.224 [2024-12-10 04:14:11.550304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.224 [2024-12-10 04:14:11.550316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.224 [2024-12-10 04:14:11.550345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.224 qpair failed and we were unable to recover it. 00:26:17.224 [2024-12-10 04:14:11.560190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.224 [2024-12-10 04:14:11.560275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.224 [2024-12-10 04:14:11.560304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.224 [2024-12-10 04:14:11.560319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.224 [2024-12-10 04:14:11.560331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.224 [2024-12-10 04:14:11.560360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.224 qpair failed and we were unable to recover it. 00:26:17.224 [2024-12-10 04:14:11.570244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.224 [2024-12-10 04:14:11.570330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.224 [2024-12-10 04:14:11.570355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.224 [2024-12-10 04:14:11.570369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.224 [2024-12-10 04:14:11.570381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.224 [2024-12-10 04:14:11.570416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.224 qpair failed and we were unable to recover it. 
00:26:17.224 [2024-12-10 04:14:11.580216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.224 [2024-12-10 04:14:11.580294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.224 [2024-12-10 04:14:11.580320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.224 [2024-12-10 04:14:11.580334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.224 [2024-12-10 04:14:11.580346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.224 [2024-12-10 04:14:11.580375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.224 qpair failed and we were unable to recover it. 00:26:17.224 [2024-12-10 04:14:11.590282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.224 [2024-12-10 04:14:11.590377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.224 [2024-12-10 04:14:11.590403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.224 [2024-12-10 04:14:11.590417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.224 [2024-12-10 04:14:11.590429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.224 [2024-12-10 04:14:11.590458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.224 qpair failed and we were unable to recover it. 00:26:17.224 [2024-12-10 04:14:11.600294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.224 [2024-12-10 04:14:11.600381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.224 [2024-12-10 04:14:11.600407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.224 [2024-12-10 04:14:11.600421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.224 [2024-12-10 04:14:11.600433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.224 [2024-12-10 04:14:11.600474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.224 qpair failed and we were unable to recover it. 
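For readers skimming this stretch of the console output: each repeated block is one failed attempt to add an I/O queue pair over NVMe/TCP. The target-side handler (_nvmf_ctrlr_add_io_qpair) rejects the CONNECT because it does not recognize controller ID 0x1, the host-side poll (nvme_fabric_qpair_connect_poll) then sees the command complete with sct 1, sc 130, and the qpair is torn down with transport error -6. The sketch below is a small, hypothetical decoder for that status pair; the name-to-value mapping follows my reading of the NVMe-oF Fabrics CONNECT status codes (sct 1 being the command-specific status type and 130 being 0x82), so treat it as an assumption rather than something the log itself asserts.

```python
# Hypothetical helper (not part of the SPDK test suite): decode the
# "sct X, sc Y" pair printed by nvme_fabric_qpair_connect_poll().
# The mappings below reflect my reading of the NVMe status code types
# and the Fabrics CONNECT command-specific values; verify against the
# spec or the SPDK headers before relying on them.

SCT_NAMES = {
    0: "generic command status",
    1: "command specific status",
    2: "media and data integrity errors",
    3: "path related status",
}

# Command-specific (sct 1) values reported for a Fabrics CONNECT (assumed).
CONNECT_SC_NAMES = {
    0x80: "connect: incompatible format",
    0x81: "connect: controller busy",
    0x82: "connect: invalid parameters",
    0x83: "connect: restart discovery",
    0x84: "connect: invalid host",
}

def decode_connect_status(sct: int, sc: int) -> str:
    """Render an 'sct, sc' pair from the log as a readable string."""
    sct_name = SCT_NAMES.get(sct, f"unknown sct {sct}")
    sc_name = CONNECT_SC_NAMES.get(sc, f"sc 0x{sc:02x}")
    return f"{sct_name} / {sc_name}"

if __name__ == "__main__":
    # The pair printed throughout this log: sct 1, sc 130 (0x82).
    print(decode_connect_status(1, 130))
```

Under that reading, the repeated "Unknown controller ID 0x1" on the target and the 0x82 completion on the host are two views of the same rejection.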
00:26:17.484 [2024-12-10 04:14:11.610355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.484 [2024-12-10 04:14:11.610445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.484 [2024-12-10 04:14:11.610471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.484 [2024-12-10 04:14:11.610485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.484 [2024-12-10 04:14:11.610497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.484 [2024-12-10 04:14:11.610526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.484 qpair failed and we were unable to recover it. 00:26:17.484 [2024-12-10 04:14:11.620357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.484 [2024-12-10 04:14:11.620452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.484 [2024-12-10 04:14:11.620485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.484 [2024-12-10 04:14:11.620506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.484 [2024-12-10 04:14:11.620518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.484 [2024-12-10 04:14:11.620558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.484 qpair failed and we were unable to recover it. 00:26:17.484 [2024-12-10 04:14:11.630387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.484 [2024-12-10 04:14:11.630483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.484 [2024-12-10 04:14:11.630510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.484 [2024-12-10 04:14:11.630524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.484 [2024-12-10 04:14:11.630536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.484 [2024-12-10 04:14:11.630575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.484 qpair failed and we were unable to recover it. 
00:26:17.484 [2024-12-10 04:14:11.640393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.484 [2024-12-10 04:14:11.640514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.484 [2024-12-10 04:14:11.640540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.484 [2024-12-10 04:14:11.640566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.484 [2024-12-10 04:14:11.640579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.484 [2024-12-10 04:14:11.640609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.484 qpair failed and we were unable to recover it. 00:26:17.484 [2024-12-10 04:14:11.650416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.484 [2024-12-10 04:14:11.650503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.484 [2024-12-10 04:14:11.650529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.484 [2024-12-10 04:14:11.650549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.484 [2024-12-10 04:14:11.650563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.484 [2024-12-10 04:14:11.650594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.484 qpair failed and we were unable to recover it. 00:26:17.484 [2024-12-10 04:14:11.660542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.484 [2024-12-10 04:14:11.660645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.485 [2024-12-10 04:14:11.660676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.485 [2024-12-10 04:14:11.660691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.485 [2024-12-10 04:14:11.660703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.485 [2024-12-10 04:14:11.660732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.485 qpair failed and we were unable to recover it. 
00:26:17.485 [2024-12-10 04:14:11.670497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.485 [2024-12-10 04:14:11.670601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.485 [2024-12-10 04:14:11.670627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.485 [2024-12-10 04:14:11.670642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.485 [2024-12-10 04:14:11.670654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.485 [2024-12-10 04:14:11.670683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.485 qpair failed and we were unable to recover it. 00:26:17.485 [2024-12-10 04:14:11.680500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.485 [2024-12-10 04:14:11.680592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.485 [2024-12-10 04:14:11.680618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.485 [2024-12-10 04:14:11.680632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.485 [2024-12-10 04:14:11.680643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.485 [2024-12-10 04:14:11.680673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.485 qpair failed and we were unable to recover it. 00:26:17.485 [2024-12-10 04:14:11.690533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.485 [2024-12-10 04:14:11.690624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.485 [2024-12-10 04:14:11.690650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.485 [2024-12-10 04:14:11.690663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.485 [2024-12-10 04:14:11.690675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.485 [2024-12-10 04:14:11.690705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.485 qpair failed and we were unable to recover it. 
00:26:17.485 [2024-12-10 04:14:11.700569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.485 [2024-12-10 04:14:11.700657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.485 [2024-12-10 04:14:11.700683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.485 [2024-12-10 04:14:11.700697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.485 [2024-12-10 04:14:11.700714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.485 [2024-12-10 04:14:11.700745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.485 qpair failed and we were unable to recover it. 00:26:17.485 [2024-12-10 04:14:11.710603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.485 [2024-12-10 04:14:11.710703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.485 [2024-12-10 04:14:11.710729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.485 [2024-12-10 04:14:11.710743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.485 [2024-12-10 04:14:11.710755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.485 [2024-12-10 04:14:11.710784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.485 qpair failed and we were unable to recover it. 00:26:17.485 [2024-12-10 04:14:11.720628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.485 [2024-12-10 04:14:11.720717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.485 [2024-12-10 04:14:11.720743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.485 [2024-12-10 04:14:11.720756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.485 [2024-12-10 04:14:11.720769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.485 [2024-12-10 04:14:11.720798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.485 qpair failed and we were unable to recover it. 
00:26:17.485 [2024-12-10 04:14:11.730706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.485 [2024-12-10 04:14:11.730821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.485 [2024-12-10 04:14:11.730847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.485 [2024-12-10 04:14:11.730861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.485 [2024-12-10 04:14:11.730872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.485 [2024-12-10 04:14:11.730902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.485 qpair failed and we were unable to recover it. 00:26:17.485 [2024-12-10 04:14:11.740679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.485 [2024-12-10 04:14:11.740777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.485 [2024-12-10 04:14:11.740803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.485 [2024-12-10 04:14:11.740817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.485 [2024-12-10 04:14:11.740828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.485 [2024-12-10 04:14:11.740859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.485 qpair failed and we were unable to recover it. 00:26:17.485 [2024-12-10 04:14:11.750754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.485 [2024-12-10 04:14:11.750855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.485 [2024-12-10 04:14:11.750885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.485 [2024-12-10 04:14:11.750901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.485 [2024-12-10 04:14:11.750913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.485 [2024-12-10 04:14:11.750943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.485 qpair failed and we were unable to recover it. 
00:26:17.485 [2024-12-10 04:14:11.760758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.485 [2024-12-10 04:14:11.760840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.485 [2024-12-10 04:14:11.760866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.485 [2024-12-10 04:14:11.760880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.485 [2024-12-10 04:14:11.760892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.485 [2024-12-10 04:14:11.760921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.485 qpair failed and we were unable to recover it. 00:26:17.485 [2024-12-10 04:14:11.770789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.485 [2024-12-10 04:14:11.770875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.485 [2024-12-10 04:14:11.770901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.485 [2024-12-10 04:14:11.770914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.485 [2024-12-10 04:14:11.770926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.485 [2024-12-10 04:14:11.770955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.485 qpair failed and we were unable to recover it. 00:26:17.485 [2024-12-10 04:14:11.780826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.485 [2024-12-10 04:14:11.780913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.485 [2024-12-10 04:14:11.780942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.485 [2024-12-10 04:14:11.780956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.485 [2024-12-10 04:14:11.780968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.485 [2024-12-10 04:14:11.780998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.485 qpair failed and we were unable to recover it. 
00:26:17.485 [2024-12-10 04:14:11.790872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.485 [2024-12-10 04:14:11.790969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.485 [2024-12-10 04:14:11.791000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.485 [2024-12-10 04:14:11.791015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.486 [2024-12-10 04:14:11.791027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.486 [2024-12-10 04:14:11.791056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.486 qpair failed and we were unable to recover it. 00:26:17.486 [2024-12-10 04:14:11.800870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.486 [2024-12-10 04:14:11.801002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.486 [2024-12-10 04:14:11.801028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.486 [2024-12-10 04:14:11.801042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.486 [2024-12-10 04:14:11.801054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.486 [2024-12-10 04:14:11.801083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.486 qpair failed and we were unable to recover it. 00:26:17.486 [2024-12-10 04:14:11.810961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.486 [2024-12-10 04:14:11.811071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.486 [2024-12-10 04:14:11.811097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.486 [2024-12-10 04:14:11.811111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.486 [2024-12-10 04:14:11.811122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.486 [2024-12-10 04:14:11.811165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.486 qpair failed and we were unable to recover it. 
00:26:17.486 [2024-12-10 04:14:11.820905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.486 [2024-12-10 04:14:11.821018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.486 [2024-12-10 04:14:11.821045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.486 [2024-12-10 04:14:11.821059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.486 [2024-12-10 04:14:11.821071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.486 [2024-12-10 04:14:11.821100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.486 qpair failed and we were unable to recover it. 00:26:17.486 [2024-12-10 04:14:11.830961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.486 [2024-12-10 04:14:11.831058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.486 [2024-12-10 04:14:11.831083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.486 [2024-12-10 04:14:11.831103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.486 [2024-12-10 04:14:11.831115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.486 [2024-12-10 04:14:11.831145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.486 qpair failed and we were unable to recover it. 00:26:17.486 [2024-12-10 04:14:11.840974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.486 [2024-12-10 04:14:11.841061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.486 [2024-12-10 04:14:11.841087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.486 [2024-12-10 04:14:11.841103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.486 [2024-12-10 04:14:11.841115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.486 [2024-12-10 04:14:11.841145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.486 qpair failed and we were unable to recover it. 
00:26:17.486 [2024-12-10 04:14:11.851006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.486 [2024-12-10 04:14:11.851135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.486 [2024-12-10 04:14:11.851161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.486 [2024-12-10 04:14:11.851176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.486 [2024-12-10 04:14:11.851187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.486 [2024-12-10 04:14:11.851216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.486 qpair failed and we were unable to recover it. 00:26:17.486 [2024-12-10 04:14:11.861079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.486 [2024-12-10 04:14:11.861185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.486 [2024-12-10 04:14:11.861212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.486 [2024-12-10 04:14:11.861226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.486 [2024-12-10 04:14:11.861237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.486 [2024-12-10 04:14:11.861267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.486 qpair failed and we were unable to recover it. 00:26:17.747 [2024-12-10 04:14:11.871139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.747 [2024-12-10 04:14:11.871292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.747 [2024-12-10 04:14:11.871317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.747 [2024-12-10 04:14:11.871331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.747 [2024-12-10 04:14:11.871343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.747 [2024-12-10 04:14:11.871378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.747 qpair failed and we were unable to recover it. 
00:26:17.747 [2024-12-10 04:14:11.881146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.747 [2024-12-10 04:14:11.881250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.747 [2024-12-10 04:14:11.881277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.747 [2024-12-10 04:14:11.881291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.747 [2024-12-10 04:14:11.881303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.747 [2024-12-10 04:14:11.881333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.747 qpair failed and we were unable to recover it. 00:26:17.747 [2024-12-10 04:14:11.891155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.747 [2024-12-10 04:14:11.891261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.747 [2024-12-10 04:14:11.891287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.747 [2024-12-10 04:14:11.891301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.747 [2024-12-10 04:14:11.891313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.747 [2024-12-10 04:14:11.891343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.747 qpair failed and we were unable to recover it. 00:26:17.747 [2024-12-10 04:14:11.901146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.747 [2024-12-10 04:14:11.901247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.747 [2024-12-10 04:14:11.901273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.747 [2024-12-10 04:14:11.901288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.747 [2024-12-10 04:14:11.901300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.747 [2024-12-10 04:14:11.901329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.747 qpair failed and we were unable to recover it. 
00:26:17.747 [2024-12-10 04:14:11.911295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.748 [2024-12-10 04:14:11.911393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.748 [2024-12-10 04:14:11.911419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.748 [2024-12-10 04:14:11.911433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.748 [2024-12-10 04:14:11.911444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.748 [2024-12-10 04:14:11.911474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.748 qpair failed and we were unable to recover it. 00:26:17.748 [2024-12-10 04:14:11.921215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.748 [2024-12-10 04:14:11.921303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.748 [2024-12-10 04:14:11.921330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.748 [2024-12-10 04:14:11.921345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.748 [2024-12-10 04:14:11.921357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.748 [2024-12-10 04:14:11.921398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.748 qpair failed and we were unable to recover it. 00:26:17.748 [2024-12-10 04:14:11.931252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.748 [2024-12-10 04:14:11.931342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.748 [2024-12-10 04:14:11.931368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.748 [2024-12-10 04:14:11.931382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.748 [2024-12-10 04:14:11.931394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.748 [2024-12-10 04:14:11.931423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.748 qpair failed and we were unable to recover it. 
00:26:17.748 [2024-12-10 04:14:11.941301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.748 [2024-12-10 04:14:11.941390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.748 [2024-12-10 04:14:11.941419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.748 [2024-12-10 04:14:11.941435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.748 [2024-12-10 04:14:11.941447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.748 [2024-12-10 04:14:11.941478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.748 qpair failed and we were unable to recover it. 00:26:17.748 [2024-12-10 04:14:11.951353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.748 [2024-12-10 04:14:11.951468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.748 [2024-12-10 04:14:11.951494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.748 [2024-12-10 04:14:11.951507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.748 [2024-12-10 04:14:11.951519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.748 [2024-12-10 04:14:11.951559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.748 qpair failed and we were unable to recover it. 00:26:17.748 [2024-12-10 04:14:11.961358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.748 [2024-12-10 04:14:11.961473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.748 [2024-12-10 04:14:11.961500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.748 [2024-12-10 04:14:11.961520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.748 [2024-12-10 04:14:11.961533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.748 [2024-12-10 04:14:11.961571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.748 qpair failed and we were unable to recover it. 
00:26:17.748 [2024-12-10 04:14:11.971411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.748 [2024-12-10 04:14:11.971511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.748 [2024-12-10 04:14:11.971537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.748 [2024-12-10 04:14:11.971560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.748 [2024-12-10 04:14:11.971573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.748 [2024-12-10 04:14:11.971603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.748 qpair failed and we were unable to recover it. 00:26:17.748 [2024-12-10 04:14:11.981430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.748 [2024-12-10 04:14:11.981515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.748 [2024-12-10 04:14:11.981540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.748 [2024-12-10 04:14:11.981564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.748 [2024-12-10 04:14:11.981577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.748 [2024-12-10 04:14:11.981606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.748 qpair failed and we were unable to recover it. 00:26:17.748 [2024-12-10 04:14:11.991449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.748 [2024-12-10 04:14:11.991561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.748 [2024-12-10 04:14:11.991597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.748 [2024-12-10 04:14:11.991612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.748 [2024-12-10 04:14:11.991624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.748 [2024-12-10 04:14:11.991654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.748 qpair failed and we were unable to recover it. 
00:26:17.748 [2024-12-10 04:14:12.001472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.748 [2024-12-10 04:14:12.001602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.748 [2024-12-10 04:14:12.001629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.748 [2024-12-10 04:14:12.001643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.748 [2024-12-10 04:14:12.001654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.748 [2024-12-10 04:14:12.001691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.748 qpair failed and we were unable to recover it. 00:26:17.748 [2024-12-10 04:14:12.011488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.748 [2024-12-10 04:14:12.011586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.748 [2024-12-10 04:14:12.011612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.748 [2024-12-10 04:14:12.011625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.748 [2024-12-10 04:14:12.011637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.748 [2024-12-10 04:14:12.011667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.748 qpair failed and we were unable to recover it. 00:26:17.748 [2024-12-10 04:14:12.021488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.748 [2024-12-10 04:14:12.021581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.748 [2024-12-10 04:14:12.021608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.748 [2024-12-10 04:14:12.021622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.748 [2024-12-10 04:14:12.021633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.748 [2024-12-10 04:14:12.021662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.748 qpair failed and we were unable to recover it. 
00:26:17.748 [2024-12-10 04:14:12.031572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.748 [2024-12-10 04:14:12.031673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.748 [2024-12-10 04:14:12.031698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.748 [2024-12-10 04:14:12.031712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.748 [2024-12-10 04:14:12.031723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.748 [2024-12-10 04:14:12.031765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.748 qpair failed and we were unable to recover it. 00:26:17.748 [2024-12-10 04:14:12.041592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.748 [2024-12-10 04:14:12.041712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.748 [2024-12-10 04:14:12.041738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.748 [2024-12-10 04:14:12.041752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.749 [2024-12-10 04:14:12.041764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.749 [2024-12-10 04:14:12.041793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.749 qpair failed and we were unable to recover it. 00:26:17.749 [2024-12-10 04:14:12.051619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.749 [2024-12-10 04:14:12.051741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.749 [2024-12-10 04:14:12.051767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.749 [2024-12-10 04:14:12.051781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.749 [2024-12-10 04:14:12.051793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.749 [2024-12-10 04:14:12.051823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.749 qpair failed and we were unable to recover it. 
00:26:17.749 [2024-12-10 04:14:12.061630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.749 [2024-12-10 04:14:12.061712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.749 [2024-12-10 04:14:12.061738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.749 [2024-12-10 04:14:12.061752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.749 [2024-12-10 04:14:12.061763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.749 [2024-12-10 04:14:12.061792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.749 qpair failed and we were unable to recover it. 00:26:17.749 [2024-12-10 04:14:12.071657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.749 [2024-12-10 04:14:12.071772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.749 [2024-12-10 04:14:12.071800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.749 [2024-12-10 04:14:12.071817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.749 [2024-12-10 04:14:12.071829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.749 [2024-12-10 04:14:12.071860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.749 qpair failed and we were unable to recover it. 00:26:17.749 [2024-12-10 04:14:12.081722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.749 [2024-12-10 04:14:12.081821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.749 [2024-12-10 04:14:12.081847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.749 [2024-12-10 04:14:12.081861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.749 [2024-12-10 04:14:12.081873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba4000b90 00:26:17.749 [2024-12-10 04:14:12.081903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.749 qpair failed and we were unable to recover it. 
00:26:17.749 [2024-12-10 04:14:12.091727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.749 [2024-12-10 04:14:12.091818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.749 [2024-12-10 04:14:12.091857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.749 [2024-12-10 04:14:12.091873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.749 [2024-12-10 04:14:12.091886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:17.749 [2024-12-10 04:14:12.091917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.749 qpair failed and we were unable to recover it. 00:26:17.749 [2024-12-10 04:14:12.101767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.749 [2024-12-10 04:14:12.101850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.749 [2024-12-10 04:14:12.101877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.749 [2024-12-10 04:14:12.101891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.749 [2024-12-10 04:14:12.101903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1559fa0 00:26:17.749 [2024-12-10 04:14:12.101932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.749 qpair failed and we were unable to recover it. 00:26:17.749 [2024-12-10 04:14:12.111867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.749 [2024-12-10 04:14:12.112007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.749 [2024-12-10 04:14:12.112039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.749 [2024-12-10 04:14:12.112055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.749 [2024-12-10 04:14:12.112068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5bb0000b90 00:26:17.749 [2024-12-10 04:14:12.112101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.749 qpair failed and we were unable to recover it. 
00:26:17.749 [2024-12-10 04:14:12.121816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.749 [2024-12-10 04:14:12.121904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.749 [2024-12-10 04:14:12.121930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.749 [2024-12-10 04:14:12.121954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.749 [2024-12-10 04:14:12.121973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5bb0000b90 00:26:17.749 [2024-12-10 04:14:12.122017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.749 qpair failed and we were unable to recover it. 00:26:18.008 [2024-12-10 04:14:12.131882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.008 [2024-12-10 04:14:12.131991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.008 [2024-12-10 04:14:12.132027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.008 [2024-12-10 04:14:12.132044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.008 [2024-12-10 04:14:12.132057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:18.008 [2024-12-10 04:14:12.132096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.008 qpair failed and we were unable to recover it. 00:26:18.008 [2024-12-10 04:14:12.141882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.008 [2024-12-10 04:14:12.141982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.008 [2024-12-10 04:14:12.142009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.008 [2024-12-10 04:14:12.142023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.008 [2024-12-10 04:14:12.142035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:26:18.009 [2024-12-10 04:14:12.142065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.009 qpair failed and we were unable to recover it. 00:26:18.009 [2024-12-10 04:14:12.142170] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:26:18.009 A controller has encountered a failure and is being reset. 00:26:18.009 Controller properly reset. 
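Editor's note: the block above is the disconnect scenario behaving as designed. While the target-side controller is being torn down, every attempt to add an I/O qpair is rejected with "Unknown controller ID 0x1", the fabrics CONNECT poll completes with sct 1 / sc 130, and the host gives that qpair up ("qpair failed and we were unable to recover it"); once the Keep Alive submission also fails, the host resets the controller and re-initializes it, which is what the next block shows. As a purely illustrative aside, not part of the test output and using nvme-cli rather than the SPDK host apps the test actually drives, a manual probe of the same listener could look like the sketch below. The address, port, and subsystem NQN are taken from the log; the retry count and sleep interval are arbitrary assumptions.

    #!/usr/bin/env bash
    # Hypothetical manual probe of the listener exercised above (values taken from the log).
    traddr=10.0.0.2
    trsvcid=4420
    subnqn=nqn.2016-06.io.spdk:cnode1
    for attempt in 1 2 3 4 5; do
        # nvme-cli flags: -t transport, -a target address, -s service id, -n subsystem NQN
        if nvme connect -t tcp -a "$traddr" -s "$trsvcid" -n "$subnqn"; then
            echo "connected on attempt $attempt"
            nvme disconnect -n "$subnqn"
            break
        fi
        echo "attempt $attempt failed, retrying"
        sleep 1
    done

While the target is mid-reset, each iteration would fail much like the CONNECT rejections logged above; once the subsystem is serving again, the first successful attempt connects and immediately disconnects.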
00:26:18.009 Initializing NVMe Controllers 00:26:18.009 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:18.009 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:18.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:18.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:18.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:18.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:18.009 Initialization complete. Launching workers. 00:26:18.009 Starting thread on core 1 00:26:18.009 Starting thread on core 2 00:26:18.009 Starting thread on core 3 00:26:18.009 Starting thread on core 0 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:18.009 00:26:18.009 real 0m10.823s 00:26:18.009 user 0m19.011s 00:26:18.009 sys 0m5.259s 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:18.009 ************************************ 00:26:18.009 END TEST nvmf_target_disconnect_tc2 00:26:18.009 ************************************ 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:18.009 rmmod nvme_tcp 00:26:18.009 rmmod nvme_fabrics 00:26:18.009 rmmod nvme_keyring 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2506466 ']' 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2506466 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2506466 ']' 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2506466 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2506466 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2506466' 00:26:18.009 killing process with pid 2506466 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2506466 00:26:18.009 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2506466 00:26:18.267 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:18.267 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:18.267 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:18.267 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:26:18.267 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:26:18.267 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:26:18.267 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:18.267 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:18.267 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:18.267 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.267 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:18.267 04:14:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.805 04:14:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:20.806 00:26:20.806 real 0m15.828s 00:26:20.806 user 0m45.629s 00:26:20.806 sys 0m7.392s 00:26:20.806 04:14:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:20.806 04:14:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:20.806 ************************************ 00:26:20.806 END TEST nvmf_target_disconnect 00:26:20.806 ************************************ 00:26:20.806 04:14:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:20.806 00:26:20.806 real 5m6.500s 00:26:20.806 user 10m51.905s 00:26:20.806 sys 1m13.843s 00:26:20.806 04:14:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:20.806 04:14:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.806 ************************************ 00:26:20.806 END TEST nvmf_host 00:26:20.806 ************************************ 00:26:20.806 04:14:14 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:26:20.806 04:14:14 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:26:20.806 04:14:14 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:20.806 04:14:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:20.806 04:14:14 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:20.806 04:14:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:20.806 ************************************ 00:26:20.806 START TEST nvmf_target_core_interrupt_mode 00:26:20.806 ************************************ 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:20.806 * Looking for test storage... 00:26:20.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:20.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.806 --rc genhtml_branch_coverage=1 00:26:20.806 --rc genhtml_function_coverage=1 00:26:20.806 --rc genhtml_legend=1 00:26:20.806 --rc geninfo_all_blocks=1 00:26:20.806 --rc geninfo_unexecuted_blocks=1 00:26:20.806 00:26:20.806 ' 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:20.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.806 --rc genhtml_branch_coverage=1 00:26:20.806 --rc genhtml_function_coverage=1 00:26:20.806 --rc genhtml_legend=1 00:26:20.806 --rc geninfo_all_blocks=1 00:26:20.806 --rc geninfo_unexecuted_blocks=1 00:26:20.806 00:26:20.806 ' 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:20.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.806 --rc genhtml_branch_coverage=1 00:26:20.806 --rc genhtml_function_coverage=1 00:26:20.806 --rc genhtml_legend=1 00:26:20.806 --rc geninfo_all_blocks=1 00:26:20.806 --rc geninfo_unexecuted_blocks=1 00:26:20.806 00:26:20.806 ' 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:20.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.806 --rc genhtml_branch_coverage=1 00:26:20.806 --rc genhtml_function_coverage=1 00:26:20.806 --rc genhtml_legend=1 00:26:20.806 --rc geninfo_all_blocks=1 00:26:20.806 --rc geninfo_unexecuted_blocks=1 00:26:20.806 00:26:20.806 ' 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.806 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:20.807 ************************************ 00:26:20.807 START TEST nvmf_abort 00:26:20.807 ************************************ 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:20.807 * Looking for test storage... 00:26:20.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:20.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.807 --rc genhtml_branch_coverage=1 00:26:20.807 --rc genhtml_function_coverage=1 00:26:20.807 --rc genhtml_legend=1 00:26:20.807 --rc geninfo_all_blocks=1 00:26:20.807 --rc geninfo_unexecuted_blocks=1 00:26:20.807 00:26:20.807 ' 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:20.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.807 --rc genhtml_branch_coverage=1 00:26:20.807 --rc genhtml_function_coverage=1 00:26:20.807 --rc genhtml_legend=1 00:26:20.807 --rc geninfo_all_blocks=1 00:26:20.807 --rc geninfo_unexecuted_blocks=1 00:26:20.807 00:26:20.807 ' 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:20.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.807 --rc genhtml_branch_coverage=1 00:26:20.807 --rc genhtml_function_coverage=1 00:26:20.807 --rc genhtml_legend=1 00:26:20.807 --rc geninfo_all_blocks=1 00:26:20.807 --rc geninfo_unexecuted_blocks=1 00:26:20.807 00:26:20.807 ' 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:20.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.807 --rc genhtml_branch_coverage=1 00:26:20.807 --rc genhtml_function_coverage=1 00:26:20.807 --rc genhtml_legend=1 00:26:20.807 --rc geninfo_all_blocks=1 00:26:20.807 --rc geninfo_unexecuted_blocks=1 00:26:20.807 00:26:20.807 ' 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:20.807 04:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.807 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.808 04:14:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:26:20.808 04:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:23.342 04:14:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:23.342 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:23.343 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
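The block above is nvmf/common.sh sorting candidate NICs into the e810, x722 and mlx arrays purely by PCI vendor:device ID and then, via the `[[ e810 == e810 ]]` branch, keeping only the e810 list; the first match, 0000:0a:00.0 (0x8086:0x159b), has just been reported. A minimal stand-alone sketch of the same lookup, assuming lspci is installed (this loop is illustrative and not part of the SPDK scripts):

# Illustrative only: list PCI addresses of Intel E810 ports the same way the
# test scripts classify them (device IDs 0x1592 and 0x159b, vendor 0x8086).
for dev_id in 1592 159b; do
    lspci -Dn -d "8086:${dev_id}" | awk '{print $1}'
done
# On this machine both E810 ports (0000:0a:00.0 and 0000:0a:00.1) would be listed.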
00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:23.343 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:23.343 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:23.343 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:23.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:23.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:26:23.343 00:26:23.343 --- 10.0.0.2 ping statistics --- 00:26:23.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.343 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:23.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:23.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:26:23.343 00:26:23.343 --- 10.0.0.1 ping statistics --- 00:26:23.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.343 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2509278 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2509278 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2509278 ']' 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.343 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:23.344 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:23.344 [2024-12-10 04:14:17.455302] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:23.344 [2024-12-10 04:14:17.456422] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:26:23.344 [2024-12-10 04:14:17.456488] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.344 [2024-12-10 04:14:17.527500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:23.344 [2024-12-10 04:14:17.588801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.344 [2024-12-10 04:14:17.588874] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.344 [2024-12-10 04:14:17.588903] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.344 [2024-12-10 04:14:17.588915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:23.344 [2024-12-10 04:14:17.588926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:23.344 [2024-12-10 04:14:17.590691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:23.344 [2024-12-10 04:14:17.590720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:23.344 [2024-12-10 04:14:17.590723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.344 [2024-12-10 04:14:17.682095] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:23.344 [2024-12-10 04:14:17.682302] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:23.344 [2024-12-10 04:14:17.682328] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
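Condensed, the topology work in the trace above amounts to: move the target-side port cvl_0_0 into a fresh network namespace and address it as 10.0.0.2, leave its sibling cvl_0_1 in the root namespace as 10.0.0.1, insert an iptables ACCEPT rule for TCP port 4420, verify connectivity with a ping in each direction, then launch nvmf_tgt inside the namespace in interrupt mode with core mask 0xE. A sketch of the equivalent manual commands, using the interface names and build path from this run:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                       # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                    # root namespace -> target port
ip netns exec "$NS" ping -c 1 10.0.0.1                # namespace -> initiator port
ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE &          # reactors come up on cores 1-3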
00:26:23.344 [2024-12-10 04:14:17.682578] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:23.344 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:23.344 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:26:23.344 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:23.344 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:23.344 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:23.604 [2024-12-10 04:14:17.739519] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:23.604 Malloc0 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:23.604 Delay0 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:23.604 [2024-12-10 04:14:17.815740] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.604 04:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:26:23.604 [2024-12-10 04:14:17.966663] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:26.136 Initializing NVMe Controllers 00:26:26.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:26:26.136 controller IO queue size 128 less than required 00:26:26.136 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:26:26.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:26.136 Initialization complete. Launching workers. 
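All of the target configuration in this test is issued over JSON-RPC, after which the SPDK abort example is pointed at the resulting listener. Replayed as plain commands (RPC is just shorthand for the rpc.py path in this workspace; the delay-bdev latencies are given in microseconds, so every I/O to Delay0 takes roughly a second, which is presumably why the test wraps the malloc bdev in a delay bdev: outstanding I/O piles up and the abort path actually gets exercised):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
$RPC bdev_malloc_create 64 4096 -b Malloc0             # 64 MB ramdisk, 4096-byte blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000       # ~1 s avg/p99 read and write latency
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128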
00:26:26.136 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28516 00:26:26.136 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28577, failed to submit 66 00:26:26.136 success 28516, unsuccessful 61, failed 0 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:26.136 rmmod nvme_tcp 00:26:26.136 rmmod nvme_fabrics 00:26:26.136 rmmod nvme_keyring 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2509278 ']' 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2509278 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2509278 ']' 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2509278 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2509278 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2509278' 00:26:26.136 killing process with pid 2509278 
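abort.sh drives the listener only through the SPDK userspace initiator (the abort example above). Had a kernel-side sanity check been wanted before nqn.2016-06.io.spdk:cnode0 was deleted, the nvme-tcp module loaded earlier plus the host NQN/ID exported by common.sh would have been enough; a purely hypothetical example, not performed in this run:

# Hypothetical, not part of abort.sh: attach the kernel initiator to the same
# listener (only valid while the cnode0 subsystem still exists).
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
nvme list                                      # the Delay0-backed namespace appears as /dev/nvmeXn1
nvme disconnect -n nqn.2016-06.io.spdk:cnode0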
00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2509278 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2509278 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:26.136 04:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:28.672 00:26:28.672 real 0m7.594s 00:26:28.672 user 0m9.643s 00:26:28.672 sys 0m3.023s 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:28.672 ************************************ 00:26:28.672 END TEST nvmf_abort 00:26:28.672 ************************************ 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:28.672 ************************************ 00:26:28.672 START TEST nvmf_ns_hotplug_stress 00:26:28.672 ************************************ 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:28.672 * Looking for test storage... 
00:26:28.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:26:28.672 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:28.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.673 --rc genhtml_branch_coverage=1 00:26:28.673 --rc genhtml_function_coverage=1 00:26:28.673 --rc genhtml_legend=1 00:26:28.673 --rc geninfo_all_blocks=1 00:26:28.673 --rc geninfo_unexecuted_blocks=1 00:26:28.673 00:26:28.673 ' 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:28.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.673 --rc genhtml_branch_coverage=1 00:26:28.673 --rc genhtml_function_coverage=1 00:26:28.673 --rc genhtml_legend=1 00:26:28.673 --rc geninfo_all_blocks=1 00:26:28.673 --rc geninfo_unexecuted_blocks=1 00:26:28.673 00:26:28.673 ' 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:28.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.673 --rc genhtml_branch_coverage=1 00:26:28.673 --rc genhtml_function_coverage=1 00:26:28.673 --rc genhtml_legend=1 00:26:28.673 --rc geninfo_all_blocks=1 00:26:28.673 --rc geninfo_unexecuted_blocks=1 00:26:28.673 00:26:28.673 ' 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:28.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.673 --rc genhtml_branch_coverage=1 00:26:28.673 --rc genhtml_function_coverage=1 
00:26:28.673 --rc genhtml_legend=1 00:26:28.673 --rc geninfo_all_blocks=1 00:26:28.673 --rc geninfo_unexecuted_blocks=1 00:26:28.673 00:26:28.673 ' 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:26:28.673 04:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:30.576 04:14:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:30.576 04:14:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:30.576 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:30.576 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:30.576 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:30.577 
04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:30.577 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:30.577 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:30.577 04:14:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:30.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:30.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:26:30.577 00:26:30.577 --- 10.0.0.2 ping statistics --- 00:26:30.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.577 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:30.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:30.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:26:30.577 00:26:30.577 --- 10.0.0.1 ping statistics --- 00:26:30.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.577 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2511619 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2511619 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2511619 ']' 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
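Untangled from the xtrace prefixes, the nvmf_tcp_init step above builds a two-port loopback topology: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side, while the second port (cvl_0_1) stays in the root namespace as the initiator side. The commands below are the ones already traced above, with interface names and addresses taken verbatim from the log:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # first port moves into the target namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator keeps the second port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The two pings are the harness's sanity check that both directions work before any NVMe/TCP traffic is attempted; the sub-millisecond round-trip times in the output confirm the path between the two ports is up.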
00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:30.577 04:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:30.846 [2024-12-10 04:14:24.982064] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:30.846 [2024-12-10 04:14:24.983212] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:26:30.846 [2024-12-10 04:14:24.983270] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:30.846 [2024-12-10 04:14:25.054760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:30.846 [2024-12-10 04:14:25.112113] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:30.846 [2024-12-10 04:14:25.112167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:30.846 [2024-12-10 04:14:25.112196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:30.846 [2024-12-10 04:14:25.112206] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:30.847 [2024-12-10 04:14:25.112215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:30.847 [2024-12-10 04:14:25.113728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:30.847 [2024-12-10 04:14:25.113757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:30.847 [2024-12-10 04:14:25.113761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.847 [2024-12-10 04:14:25.203017] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:30.847 [2024-12-10 04:14:25.203244] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:30.847 [2024-12-10 04:14:25.203275] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:30.847 [2024-12-10 04:14:25.203498] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
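These notices follow directly from the launch flags recorded a few entries earlier: -m 0xE pins the reactors to cores 1, 2 and 3 (hence the three "Reactor started" lines), and --interrupt-mode is what switches app_thread and the nvmf_tgt poll groups to intr mode. A minimal sketch of that launch plus a readiness wait is shown below; the polling loop is only a stand-in for the harness's waitforlisten helper, and SPDK_DIR stands in for the workspace path in the log:

# Start the target inside the target-side namespace, exactly as traced.
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!

# Wait until the RPC server answers on the default UNIX socket (stand-in for waitforlisten).
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1    # give up if the target died during startup
    sleep 0.5
done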
00:26:31.110 04:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:31.110 04:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:26:31.110 04:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:31.110 04:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:31.110 04:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:31.110 04:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:31.110 04:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:26:31.110 04:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:31.371 [2024-12-10 04:14:25.514535] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:31.371 04:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:31.631 04:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:31.890 [2024-12-10 04:14:26.058947] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.890 04:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:32.151 04:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:26:32.410 Malloc0 00:26:32.410 04:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:32.669 Delay0 00:26:32.669 04:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:32.969 04:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:26:33.258 NULL1 00:26:33.258 04:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
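Stripped of the xtrace prefixes, the provisioning phase of ns_hotplug_stress.sh is the following RPC sequence; every command and argument appears verbatim in the trace above, and rpc_py abbreviates the full scripts/rpc.py path used there:

rpc_py="$SPDK_DIR/scripts/rpc.py"    # the full workspace path in the log

$rpc_py nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, options as traced
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                              # allow any host, max 10 namespaces
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc_py bdev_malloc_create 32 512 -b Malloc0                        # 32 MiB ram disk, 512 B blocks
$rpc_py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # large artificial read/write latencies
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # becomes namespace 1
$rpc_py bdev_null_create NULL1 1000 512                             # null bdev, resized by the stress loop
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1      # becomes namespace 2

Delay0 (a high-latency wrapper around Malloc0) and NULL1 become namespaces 1 and 2 of cnode1; the delay keeps I/O outstanding long enough for the hot-remove in the next phase to race against it, and the null bdev exists mainly to be resized while attached.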
00:26:33.516 04:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2511921 00:26:33.516 04:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:33.516 04:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:33.516 04:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:26:33.774 04:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:34.032 04:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:26:34.032 04:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:26:34.290 true 00:26:34.290 04:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:34.290 04:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:34.548 04:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:35.114 04:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:26:35.114 04:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:26:35.372 true 00:26:35.372 04:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:35.372 04:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:35.630 04:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:35.888 04:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:26:35.888 04:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:26:36.146 true 00:26:36.146 04:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:36.146 04:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:36.403 04:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:36.661 04:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:26:36.662 04:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:26:36.919 true 00:26:36.919 04:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:36.919 04:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:37.855 Read completed with error (sct=0, sc=11) 00:26:37.855 04:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:38.113 04:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:26:38.113 04:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:26:38.371 true 00:26:38.371 04:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:38.371 04:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:38.629 04:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:39.196 04:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:26:39.196 04:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:26:39.196 true 00:26:39.196 04:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:39.196 04:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:39.453 04:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:40.019 04:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:26:40.019 04:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:26:40.019 true 00:26:40.019 04:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:40.019 04:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:40.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:40.956 04:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:40.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:41.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:41.214 04:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:26:41.214 04:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:26:41.472 true 00:26:41.730 04:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:41.730 04:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:41.988 04:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:42.246 04:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:26:42.246 04:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:26:42.504 true 00:26:42.504 04:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:42.504 04:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:42.761 04:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:43.019 04:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:26:43.019 
04:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:26:43.277 true 00:26:43.277 04:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:43.277 04:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:44.213 04:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:44.471 04:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:26:44.471 04:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:26:44.729 true 00:26:44.729 04:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:44.729 04:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:44.987 04:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:45.245 04:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:26:45.245 04:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:26:45.503 true 00:26:45.503 04:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:45.503 04:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:45.761 04:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:46.019 04:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:26:46.019 04:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:26:46.277 true 00:26:46.277 04:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:46.277 04:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:47.651 04:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:47.651 04:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:26:47.651 04:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:26:47.909 true 00:26:47.909 04:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:47.909 04:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:48.166 04:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:48.423 04:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:26:48.423 04:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:26:48.680 true 00:26:48.680 04:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:48.680 04:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:48.937 04:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:49.501 04:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:26:49.501 04:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:26:49.501 true 00:26:49.501 04:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:49.501 04:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:50.437 04:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:50.695 04:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:26:50.695 04:14:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:26:50.951 true 00:26:50.951 04:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:50.951 04:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:51.208 04:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:51.466 04:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:26:51.466 04:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:26:51.725 true 00:26:51.725 04:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:51.725 04:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:51.983 04:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:52.241 04:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:26:52.241 04:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:26:52.807 true 00:26:52.807 04:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:52.807 04:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:53.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:53.741 04:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:53.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:53.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:53.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:53.741 04:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:26:53.741 04:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 
00:26:54.007 true 00:26:54.007 04:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:54.007 04:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:54.268 04:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:54.526 04:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:26:54.526 04:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:26:54.784 true 00:26:55.042 04:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:55.042 04:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:55.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.979 04:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:55.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.979 04:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:26:55.979 04:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:26:56.237 true 00:26:56.237 04:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:56.237 04:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:56.495 04:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:57.061 04:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:26:57.061 04:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:26:57.061 true 00:26:57.061 04:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:57.061 04:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:57.319 04:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:57.577 04:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:26:57.577 04:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:26:57.835 true 00:26:57.835 04:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:57.835 04:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:59.210 04:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:59.210 04:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:26:59.210 04:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:26:59.467 true 00:26:59.467 04:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:26:59.467 04:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:59.725 04:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:59.983 04:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:26:59.983 04:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:27:00.241 true 00:27:00.241 04:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:27:00.241 04:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:00.499 04:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:00.757 04:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:27:00.757 04:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:27:01.014 true 00:27:01.014 04:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:27:01.014 04:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:01.952 04:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:02.210 04:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:27:02.210 04:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:27:02.468 true 00:27:02.468 04:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:27:02.468 04:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:02.726 04:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:02.983 04:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:27:02.984 04:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:27:03.241 true 00:27:03.241 04:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:27:03.241 04:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:03.499 04:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:03.757 Initializing NVMe Controllers 00:27:03.757 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:03.757 Controller IO queue size 128, less than required. 00:27:03.757 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:03.757 Controller IO queue size 128, less than required. 00:27:03.757 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:03.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:03.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:03.757 Initialization complete. Launching workers. 
00:27:03.757 ======================================================== 00:27:03.757 Latency(us) 00:27:03.757 Device Information : IOPS MiB/s Average min max 00:27:03.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 351.36 0.17 121147.53 3103.54 1080817.56 00:27:03.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7157.39 3.49 17829.17 2781.63 445400.11 00:27:03.757 ======================================================== 00:27:03.757 Total : 7508.76 3.67 22663.84 2781.63 1080817.56 00:27:03.757 00:27:04.015 04:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:27:04.015 04:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:27:04.273 true 00:27:04.273 04:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2511921 00:27:04.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2511921) - No such process 00:27:04.273 04:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2511921 00:27:04.273 04:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:04.531 04:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:04.789 04:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:27:04.789 04:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:27:04.789 04:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:27:04.789 04:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:04.789 04:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:27:05.047 null0 00:27:05.047 04:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:05.047 04:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:05.047 04:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:27:05.305 null1 00:27:05.305 04:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:05.305 04:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:05.305 04:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:27:05.563 null2 00:27:05.563 04:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:05.563 04:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:05.563 04:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:27:05.821 null3 00:27:05.821 04:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:05.821 04:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:05.821 04:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:27:06.079 null4 00:27:06.079 04:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:06.079 04:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:06.079 04:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:27:06.337 null5 00:27:06.337 04:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:06.337 04:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:06.337 04:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:27:06.595 null6 00:27:06.595 04:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:06.595 04:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:06.595 04:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:27:06.854 null7 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
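For reference, the eight bdev_null_create entries traced above correspond to a setup loop of roughly the following shape (a minimal sketch reconstructed from the xtrace, not the verbatim script; $rpc_py stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shown in the log):

    nthreads=8
    pids=()
    # null0..null7: 100 MiB null bdevs with a 4096-byte block size, as in the trace
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096
    done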
00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
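Each add_remove worker traced here (the ns_hotplug_stress.sh@14-@18 entries) hot-adds and then hot-removes its namespace ten times in a row; reconstructed from the xtrace it is approximately the following (same $rpc_py assumption as above):

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }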
00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
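The pids+=($!) entries show these workers being launched in the background, one per namespace/bdev pair, with the wait just below (@66) blocking on all eight PIDs; a sketch under the same assumptions:

    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # NSIDs 1..8 backed by null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"   # traced below as a wait on the eight worker PIDs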
00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2515999 2516000 2516002 2516004 2516006 2516008 2516010 2516012 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.854 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:07.113 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:07.113 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:07.113 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:07.113 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:07.113 04:15:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:07.113 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:07.113 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:07.113 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:07.679 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.679 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.679 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:07.679 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.679 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.680 04:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:07.937 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:07.938 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:07.938 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:07.938 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:07.938 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:07.938 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:07.938 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:07.938 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
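The earlier entries in this excerpt (ns_hotplug_stress.sh@44-@50, before the perf latency summary) exercise the same RPCs while the I/O generator (PID 2511921) is still running: namespace 1 is removed and re-added as Delay0, and the NULL1 bdev is grown by 1 MiB each pass. One plausible shape for that loop, again reconstructed only from the trace:

    while kill -0 "$perf_pid"; do                  # stops once the perf process (2511921 here) exits
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))               # 1021, 1022, ... in this excerpt
        $rpc_py bdev_null_resize NULL1 "$null_size"
    done
    wait "$perf_pid"                               # traced at @53 after kill -0 reports "No such process"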
00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.196 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:08.454 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:08.454 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:08.454 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:08.454 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:08.454 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:08.454 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:08.454 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:08.454 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:08.713 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.713 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.713 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:08.713 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.713 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.713 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:08.713 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.713 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.713 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:08.713 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.713 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.713 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:08.713 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.713 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.713 04:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:08.713 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.713 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.713 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:08.713 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.713 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.713 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:08.713 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.713 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.713 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:08.972 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:08.972 04:15:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:08.972 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:08.972 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:08.972 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:08.972 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:08.972 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:08.972 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:09.230 04:15:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:09.230 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:09.796 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:09.796 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:09.796 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:09.796 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:09.796 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:09.796 
04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:09.796 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:09.796 04:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:10.053 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:10.053 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:10.054 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:10.312 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:10.312 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:10.312 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:10.312 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:10.312 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:10.312 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:10.312 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:10.312 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:10.570 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:10.570 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:27:10.570 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:10.570 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:10.570 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:10.570 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:10.570 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:10.570 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:10.570 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:10.570 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:10.570 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:10.570 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:10.570 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:10.570 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:10.570 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:10.571 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:10.571 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:10.571 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:10.571 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:10.571 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:10.571 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:10.571 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:10.571 
04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:10.571 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:10.829 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:10.829 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:10.829 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:10.829 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:10.829 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:10.829 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:10.829 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:10.829 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:11.087 04:15:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:11.087 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:11.344 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:11.344 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:11.344 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:11.344 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:11.344 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:11.344 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:11.344 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:11.344 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:11.345 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:11.345 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:11.603 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:11.861 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:11.861 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:11.861 04:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:11.861 04:15:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:11.861 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:12.119 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:12.119 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:12.119 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:12.119 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:12.119 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:12.119 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:12.119 
04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:12.119 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:12.377 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:12.635 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:12.635 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:12.635 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:12.635 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:12.635 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:12.635 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:12.635 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:12.635 04:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:12.893 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:12.893 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:12.893 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:12.893 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
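The repeated @16/@17/@18 xtrace lines above come from the hotplug loop in target/ns_hotplug_stress.sh: for ten iterations it re-attaches the eight null bdevs (null0..null7) as namespaces 1..8 of nqn.2016-06.io.spdk:cnode1 and then detaches them again while I/O is running. A minimal sketch of that loop, assuming the null bdevs and the cnode1 subsystem were created earlier in the script; the backgrounding is an assumption suggested by the interleaved counters in the log, not the verbatim script:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

i=0
while (( i < 10 )); do
    # Attach null0..null7 as namespace IDs 1..8 (the interleaved xtrace above
    # suggests these RPCs run concurrently).
    for n in $(seq 0 7); do
        "$rpc" nvmf_subsystem_add_ns -n $((n + 1)) "$subsys" "null$n" &
    done
    wait
    # Detach them again to exercise namespace hotplug under load.
    for n in $(seq 0 7); do
        "$rpc" nvmf_subsystem_remove_ns "$subsys" $((n + 1)) &
    done
    wait
    (( ++i ))
done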
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:12.893 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:12.893 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:12.893 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:12.893 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:12.893 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:12.893 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:12.893 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:12.893 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:12.893 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:12.893 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:13.151 rmmod nvme_tcp 00:27:13.151 rmmod nvme_fabrics 00:27:13.151 rmmod nvme_keyring 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2511619 ']' 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2511619 00:27:13.151 04:15:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2511619 ']' 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2511619 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2511619 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2511619' 00:27:13.151 killing process with pid 2511619 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2511619 00:27:13.151 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2511619 00:27:13.441 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:13.441 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:13.441 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:13.441 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:27:13.441 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:27:13.441 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:13.441 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:27:13.441 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:13.441 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:13.441 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.441 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:13.441 04:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.395 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:15.395 00:27:15.395 real 0m47.182s 00:27:15.395 user 3m20.410s 00:27:15.395 sys 0m20.841s 00:27:15.395 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:15.395 04:15:09 
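The teardown entries above (the trap cleared at @68, nvmftestfini at @70, the rmmod output, killprocess 2511619, and the final ip -4 addr flush cvl_0_1) follow the usual nvmf/common.sh cleanup sequence: unload the host-side kernel NVMe-oF modules, stop the target process, restore iptables, and drop the test network namespace. A rough sketch of that sequence with simplified helper names, not the verbatim common.sh code:

nvmftestfini_sketch() {
    local nvmfpid=$1

    # Unload the modules pulled in by "nvme connect" during the test.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the nvmf_tgt reactor process started at test setup.
    if [[ -n "$nvmfpid" ]] && kill -0 "$nvmfpid" 2>/dev/null; then
        kill "$nvmfpid"
        wait "$nvmfpid" 2>/dev/null
    fi

    # Drop the SPDK_NVMF iptables rules added for port 4420, remove the
    # target namespace, and flush the initiator-side address.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null
    ip -4 addr flush cvl_0_1
}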
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:15.395 ************************************ 00:27:15.395 END TEST nvmf_ns_hotplug_stress 00:27:15.395 ************************************ 00:27:15.395 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:15.395 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:15.395 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:15.395 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:15.395 ************************************ 00:27:15.395 START TEST nvmf_delete_subsystem 00:27:15.395 ************************************ 00:27:15.395 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:15.655 * Looking for test storage... 00:27:15.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:27:15.655 04:15:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:15.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.655 --rc genhtml_branch_coverage=1 00:27:15.655 --rc genhtml_function_coverage=1 00:27:15.655 --rc genhtml_legend=1 00:27:15.655 --rc geninfo_all_blocks=1 00:27:15.655 --rc geninfo_unexecuted_blocks=1 00:27:15.655 00:27:15.655 ' 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:15.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.655 --rc genhtml_branch_coverage=1 00:27:15.655 --rc genhtml_function_coverage=1 00:27:15.655 --rc genhtml_legend=1 00:27:15.655 --rc geninfo_all_blocks=1 00:27:15.655 --rc geninfo_unexecuted_blocks=1 00:27:15.655 00:27:15.655 ' 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:15.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.655 --rc genhtml_branch_coverage=1 00:27:15.655 --rc genhtml_function_coverage=1 00:27:15.655 --rc genhtml_legend=1 00:27:15.655 --rc geninfo_all_blocks=1 00:27:15.655 --rc 
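The scripts/common.sh xtrace above (cmp_versions, IFS=.-, ver1_l=2, ver2_l=1, the per-component decimal and compare steps, return 0) is the helper deciding whether the installed lcov is older than 2.0, so the matching coverage flag spelling can be exported. The same idea condensed into a small standalone function; this is a simplification of the real cmp_versions, not a copy of it:

# Return 0 (true) if version $1 is strictly less than version $2.
version_lt() {
    local -a v1 v2
    local i n
    IFS='.-' read -ra v1 <<< "$1"
    IFS='.-' read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* flag spelling"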
geninfo_unexecuted_blocks=1 00:27:15.655 00:27:15.655 ' 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:15.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.655 --rc genhtml_branch_coverage=1 00:27:15.655 --rc genhtml_function_coverage=1 00:27:15.655 --rc genhtml_legend=1 00:27:15.655 --rc geninfo_all_blocks=1 00:27:15.655 --rc geninfo_unexecuted_blocks=1 00:27:15.655 00:27:15.655 ' 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.655 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:15.656 04:15:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:27:15.656 04:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:18.192 04:15:11 
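The build_nvmf_app_args lines above show how the target command line is assembled before nvmftestinit runs: the shared-memory id and the 0xFFFF tracepoint group mask are always appended, and because this suite runs with --interrupt-mode the "[ 1 -eq 1 ]" branch also appends --interrupt-mode to NVMF_APP. A condensed sketch of that assembly; the TEST_INTERRUPT_MODE variable is illustrative, the real script tests its own arguments:

NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_APP_SHM_ID=0
TEST_INTERRUPT_MODE=1   # hypothetical flag standing in for the suite's option parsing

# Always pass the shm id and enable all tracepoint groups for the test run.
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)

# Interrupt-mode suites add the extra flag guarded by the '[ 1 -eq 1 ]' check above.
if [[ "$TEST_INTERRUPT_MODE" -eq 1 ]]; then
    NVMF_APP+=(--interrupt-mode)
fi

echo "${NVMF_APP[@]}"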
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:18.192 04:15:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:18.192 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:18.192 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:18.192 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.193 04:15:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:18.193 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:18.193 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
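gather_supported_nvmf_pci_devs above walks the known Intel and Mellanox device IDs (here both e810 ports report 0x8086:0x159b), then resolves each PCI address to its kernel netdev through sysfs, which is where the "Found net devices under 0000:0a:00.0: cvl_0_0" lines come from. A minimal version of that PCI-to-netdev lookup, assuming the PCI addresses are already known; the helper name is hypothetical, not the common.sh implementation:

# Print the network interface name(s) bound to a PCI function, e.g. 0000:0a:00.0.
pci_to_netdev() {
    local pci=$1 dev
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e "$dev" ]] || continue   # glob did not match: no netdev bound to this function
        echo "Found net devices under $pci: ${dev##*/}"
    done
}

pci_to_netdev 0000:0a:00.0   # -> cvl_0_0 on this test node
pci_to_netdev 0000:0a:00.1   # -> cvl_0_1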
NVMF_TARGET_INTERFACE=cvl_0_0 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:18.193 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:18.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:27:18.193 00:27:18.193 --- 10.0.0.2 ping statistics --- 00:27:18.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.193 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:18.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
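nvmf_tcp_init above splits the two e810 ports into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables ACCEPT rule is inserted for TCP port 4420, and the pings confirm both directions before the target starts. The same setup boiled down, with device and address values taken from the log and the iptables comment shortened:

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

# The target interface lives in its own namespace so the kernel initiator and
# the SPDK target do not share an IP stack on the same host.
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic reach the listener that will be created on port 4420.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF

ping -c 1 10.0.0.2                       # initiator -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> initiator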
00:27:18.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:27:18.193 00:27:18.193 --- 10.0.0.1 ping statistics --- 00:27:18.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.193 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2519428 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2519428 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2519428 ']' 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
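For readers following the trace: nvmf_tcp_init (common.sh @250-@291 above) isolates the first e810 port in a private network namespace and leaves the second port in the root namespace as the initiator side, then verifies reachability before anything NVMe-related starts. A minimal stand-alone sketch of that topology, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing this run happens to use:

    # target port lives in its own namespace; initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open NVMe/TCP port 4420 on the initiator-facing interface (tagged so cleanup can find the rule)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    # sanity-check both directions, as the pings above do
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" ...) assignment at @293 is what makes every subsequent nvmf_tgt invocation run inside cvl_0_0_ns_spdk.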
00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:18.193 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:18.193 [2024-12-10 04:15:12.182827] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:18.193 [2024-12-10 04:15:12.183908] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:27:18.193 [2024-12-10 04:15:12.183961] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.193 [2024-12-10 04:15:12.278451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:18.193 [2024-12-10 04:15:12.349611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.193 [2024-12-10 04:15:12.349673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.193 [2024-12-10 04:15:12.349715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.193 [2024-12-10 04:15:12.349738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.193 [2024-12-10 04:15:12.349780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:18.193 [2024-12-10 04:15:12.351450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.193 [2024-12-10 04:15:12.351460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.193 [2024-12-10 04:15:12.449530] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:18.193 [2024-12-10 04:15:12.449540] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:18.193 [2024-12-10 04:15:12.449860] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
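nvmfappstart then launches the target inside that namespace, and waitforlisten blocks until the RPC socket responds; the NOTICE lines above are the DPDK, interrupt-mode and reactor start-up messages from that process. A rough stand-alone equivalent of the launch-and-wait step (the polling loop is a simplified stand-in for the script's waitforlisten helper, not its exact logic; paths are relative to the SPDK tree):

    NS="ip netns exec cvl_0_0_ns_spdk"
    $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # the UNIX-domain RPC socket is visible from the root namespace, so no netns prefix is needed here
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.1
    done

With -m 0x3 the app is pinned to cores 0 and 1, which is why exactly two reactors start; --interrupt-mode is what triggers the "Set spdk_thread (...) to intr mode" notices.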
00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:18.194 [2024-12-10 04:15:12.540257] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:18.194 [2024-12-10 04:15:12.556566] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:18.194 NULL1 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.194 04:15:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.194 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:18.452 Delay0 00:27:18.452 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.452 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:18.452 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.452 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:18.452 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.452 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2519458 00:27:18.452 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:18.452 04:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:27:18.453 [2024-12-10 04:15:12.632643] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
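Collecting the rpc_cmd traces above into one place: the test exports a null bdev wrapped in a delay bdev through a single TCP subsystem, then points spdk_nvme_perf at it from the root namespace. Roughly, in plain rpc.py form (same arguments as the trace shows; rpc_cmd is essentially a wrapper around scripts/rpc.py on the default socket):

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512               # 1000 MB, 512-byte blocks, no backing storage
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s of added latency on reads and writes
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # drive queued I/O against the target for 5 seconds
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

The large delay keeps up to 128 queued commands in flight, so the nvmf_delete_subsystem call that follows lands while I/O is still outstanding; the flood of "completed with error (sct=0, sc=8)" lines below is the expected fallout of that race rather than a failure of the test itself.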
00:27:20.360 04:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:20.360 04:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.360 04:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 [2024-12-10 04:15:14.689876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2103680 is same with the state(6) to be set 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 Read completed 
with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 starting I/O failed: -6 00:27:20.360 Read completed with error (sct=0, sc=8) 00:27:20.360 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 starting I/O failed: -6 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 [2024-12-10 04:15:14.690676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3cd4000c40 is same with the state(6) to be set 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error 
(sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read 
completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:20.361 Write completed with error (sct=0, sc=8) 00:27:20.361 Read completed with error (sct=0, sc=8) 00:27:21.297 [2024-12-10 04:15:15.649681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21049b0 is same with the state(6) to be set 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Write completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Write completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Write completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 [2024-12-10 04:15:15.692181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21034a0 is same with the state(6) to be set 00:27:21.555 Write completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Write completed with error (sct=0, sc=8) 00:27:21.555 Write completed with error (sct=0, sc=8) 00:27:21.555 Write completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Write completed with error (sct=0, sc=8) 00:27:21.555 Write completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.555 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Write completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Write completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 [2024-12-10 04:15:15.692928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21032c0 is same with the state(6) to be set 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Write completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed 
with error (sct=0, sc=8) 00:27:21.556 Write completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Write completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Write completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Write completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 [2024-12-10 04:15:15.693090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2103860 is same with the state(6) to be set 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Write completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Write completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 Write completed with error (sct=0, sc=8) 00:27:21.556 Write completed with error (sct=0, sc=8) 00:27:21.556 Write completed with error (sct=0, sc=8) 00:27:21.556 Write completed with error (sct=0, sc=8) 00:27:21.556 Read completed with error (sct=0, sc=8) 00:27:21.556 [2024-12-10 04:15:15.693863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3cd400d020 is same with the state(6) to be set 00:27:21.556 Initializing NVMe Controllers 00:27:21.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:21.556 Controller IO queue size 128, less than required. 00:27:21.556 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:21.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:21.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:21.556 Initialization complete. Launching workers. 
00:27:21.556 ======================================================== 00:27:21.556 Latency(us) 00:27:21.556 Device Information : IOPS MiB/s Average min max 00:27:21.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.77 0.08 973545.18 811.52 1042881.85 00:27:21.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.84 0.08 881490.49 369.49 1013516.57 00:27:21.556 ======================================================== 00:27:21.556 Total : 319.61 0.16 928947.26 369.49 1042881.85 00:27:21.556 00:27:21.556 [2024-12-10 04:15:15.694317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21049b0 (9): Bad file descriptor 00:27:21.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:27:21.556 04:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.556 04:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:27:21.556 04:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2519458 00:27:21.556 04:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2519458 00:27:22.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2519458) - No such process 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2519458 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2519458 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2519458 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:22.124 [2024-12-10 04:15:16.216423] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2519970 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2519970 00:27:22.124 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:22.124 [2024-12-10 04:15:16.275883] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
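The second pass (@48 onward) recreates the subsystem, re-attaches Delay0 and starts a shorter 3-second perf run; the @56-@60 lines that follow are a bounded poll on the initiator's pid. Reduced to its core, the waiting pattern looks roughly like this (variable names assumed; the authoritative version is test/nvmf/target/delete_subsystem.sh):

    delay=0
    # poll every 0.5 s until spdk_nvme_perf exits; give up after ~10 s
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && { echo "perf did not finish in time" >&2; exit 1; }
        sleep 0.5
    done
    wait "$perf_pid"   # reap the child and pick up its exit status

In the trace the kill -0 probe is what eventually prints "No such process" once perf has finished, after which the script waits on the pid and moves on to nvmftestfini.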
00:27:22.383 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:22.383 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2519970 00:27:22.383 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:22.951 04:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:22.951 04:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2519970 00:27:22.951 04:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:23.517 04:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:23.517 04:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2519970 00:27:23.517 04:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:24.083 04:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:24.083 04:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2519970 00:27:24.083 04:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:24.652 04:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:24.652 04:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2519970 00:27:24.652 04:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:24.911 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:24.911 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2519970 00:27:24.911 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:25.477 Initializing NVMe Controllers 00:27:25.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:25.477 Controller IO queue size 128, less than required. 00:27:25.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:25.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:25.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:25.477 Initialization complete. Launching workers. 
00:27:25.477 ======================================================== 00:27:25.477 Latency(us) 00:27:25.477 Device Information : IOPS MiB/s Average min max 00:27:25.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004635.44 1000270.92 1011769.81 00:27:25.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004625.77 1000211.87 1040844.24 00:27:25.477 ======================================================== 00:27:25.477 Total : 256.00 0.12 1004630.61 1000211.87 1040844.24 00:27:25.477 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2519970 00:27:25.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2519970) - No such process 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2519970 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:25.477 rmmod nvme_tcp 00:27:25.477 rmmod nvme_fabrics 00:27:25.477 rmmod nvme_keyring 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2519428 ']' 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2519428 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2519428 ']' 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2519428 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2519428 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2519428' 00:27:25.477 killing process with pid 2519428 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2519428 00:27:25.477 04:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2519428 00:27:25.736 04:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:25.736 04:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:25.736 04:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:25.736 04:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:27:25.736 04:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:27:25.736 04:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:25.736 04:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:27:25.736 04:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:25.737 04:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:25.737 04:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.737 04:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.737 04:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.283 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:28.283 00:27:28.283 real 0m12.374s 00:27:28.283 user 0m24.715s 00:27:28.283 sys 0m3.733s 00:27:28.283 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:28.283 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:28.283 ************************************ 00:27:28.283 END TEST nvmf_delete_subsystem 00:27:28.283 ************************************ 00:27:28.283 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:28.283 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:28.283 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:27:28.283 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:28.283 ************************************ 00:27:28.283 START TEST nvmf_host_management 00:27:28.283 ************************************ 00:27:28.283 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:28.283 * Looking for test storage... 00:27:28.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:28.283 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:28.283 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:27:28.283 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:28.283 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:28.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.284 --rc genhtml_branch_coverage=1 00:27:28.284 --rc genhtml_function_coverage=1 00:27:28.284 --rc genhtml_legend=1 00:27:28.284 --rc geninfo_all_blocks=1 00:27:28.284 --rc geninfo_unexecuted_blocks=1 00:27:28.284 00:27:28.284 ' 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:28.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.284 --rc genhtml_branch_coverage=1 00:27:28.284 --rc genhtml_function_coverage=1 00:27:28.284 --rc genhtml_legend=1 00:27:28.284 --rc geninfo_all_blocks=1 00:27:28.284 --rc geninfo_unexecuted_blocks=1 00:27:28.284 00:27:28.284 ' 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:28.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.284 --rc genhtml_branch_coverage=1 00:27:28.284 --rc genhtml_function_coverage=1 00:27:28.284 --rc genhtml_legend=1 00:27:28.284 --rc geninfo_all_blocks=1 00:27:28.284 --rc geninfo_unexecuted_blocks=1 00:27:28.284 00:27:28.284 ' 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:28.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.284 --rc genhtml_branch_coverage=1 00:27:28.284 --rc genhtml_function_coverage=1 00:27:28.284 --rc genhtml_legend=1 
00:27:28.284 --rc geninfo_all_blocks=1 00:27:28.284 --rc geninfo_unexecuted_blocks=1 00:27:28.284 00:27:28.284 ' 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.284 04:15:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:28.284 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:28.285 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:28.285 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:27:28.285 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:28.285 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:28.285 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:28.285 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:28.285 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:28.285 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.285 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:28.285 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.285 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:28.285 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:28.285 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:27:28.285 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:30.190 04:15:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:30.190 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:30.190 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
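The loop traced here (nvmf/common.sh@410-@428, continuing below for the second E810 port) resolves each detected PCI function to its kernel net interface by globbing /sys/bus/pci/devices/$pci/net/. A minimal stand-alone sketch of that sysfs lookup follows; it is illustrative only, the PCI addresses are the two from this run, and the real gather_supported_nvmf_pci_devs helper in test/nvmf/common.sh additionally filters by vendor/device ID and handles the RDMA case.

#!/usr/bin/env bash
# Hedged sketch of the PCI-to-netdev mapping seen in the trace; not the actual
# gather_supported_nvmf_pci_devs helper from test/nvmf/common.sh.
shopt -s nullglob                          # make an empty glob expand to nothing
pci_devs=("0000:0a:00.0" "0000:0a:00.1")   # the two E810 functions in this run
net_devs=()
for pci in "${pci_devs[@]}"; do
    # Each directory under .../net is named after a kernel net interface.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    ((${#pci_net_devs[@]} == 0)) && continue
    # Keep only the interface names, as the trace does with ${pci_net_devs[@]##*/}.
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done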
00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:30.190 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:30.190 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:30.190 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.191 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.191 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.191 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:30.191 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:30.191 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:30.191 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:30.191 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:30.191 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:30.191 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.191 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:30.191 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:30.191 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:30.191 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:30.191 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:30.191 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:30.191 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:30.191 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:30.191 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:30.450 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:30.450 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:30.450 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:30.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:27:30.451 00:27:30.451 --- 10.0.0.2 ping statistics --- 00:27:30.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.451 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:30.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:30.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:27:30.451 00:27:30.451 --- 10.0.0.1 ping statistics --- 00:27:30.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.451 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2522317 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2522317 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2522317 ']' 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:30.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.451 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:30.451 [2024-12-10 04:15:24.662700] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:30.451 [2024-12-10 04:15:24.663763] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:27:30.451 [2024-12-10 04:15:24.663838] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.451 [2024-12-10 04:15:24.740060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:30.451 [2024-12-10 04:15:24.801082] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.451 [2024-12-10 04:15:24.801145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:30.451 [2024-12-10 04:15:24.801159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.451 [2024-12-10 04:15:24.801170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:30.451 [2024-12-10 04:15:24.801179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:30.451 [2024-12-10 04:15:24.802939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:30.451 [2024-12-10 04:15:24.803001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:30.451 [2024-12-10 04:15:24.803066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:30.451 [2024-12-10 04:15:24.803069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.710 [2024-12-10 04:15:24.903404] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:30.710 [2024-12-10 04:15:24.903652] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:30.710 [2024-12-10 04:15:24.903973] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:30.710 [2024-12-10 04:15:24.904654] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:30.710 [2024-12-10 04:15:24.904916] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
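At this point the target is up: starttarget ran nvmfappstart -m 0x1E, which launches build/bin/nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --interrupt-mode and then waits for its RPC socket (the waitforlisten message above, followed by the "to intr mode" notices). A condensed sketch of that launch-and-wait sequence follows; the polling loop is a simplified stand-in for the real waitforlisten helper, and using framework_wait_init as the readiness probe is an assumption rather than what autotest_common.sh actually does.

#!/usr/bin/env bash
# Hedged sketch of starting nvmf_tgt in interrupt mode and waiting for its RPC
# socket, mirroring the flags visible in the trace above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from this run
NS=cvl_0_0_ns_spdk
RPC_SOCK=/var/tmp/spdk.sock

# -i 0: shared-memory id, -e 0xFFFF: enable all tracepoint groups,
# -m 0x1E: reactors on cores 1-4, --interrupt-mode: event-driven reactors
# instead of busy polling (this is what produces the intr-mode notices above).
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!

# Ready once the JSON-RPC socket answers; framework_wait_init blocks until
# subsystem initialization has finished.
until "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" framework_wait_init >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"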
00:27:30.710 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:30.710 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:30.710 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:30.710 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:30.710 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:30.710 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.710 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:30.710 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.710 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:30.710 [2024-12-10 04:15:24.955270] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.710 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.710 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:27:30.710 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:30.710 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:30.710 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:30.710 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:27:30.710 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:27:30.710 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.710 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:30.710 Malloc0 00:27:30.710 [2024-12-10 04:15:25.031489] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.710 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.710 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:27:30.710 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:30.710 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:30.710 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2522367 00:27:30.710 04:15:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2522367 /var/tmp/bdevperf.sock 00:27:30.710 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2522367 ']' 00:27:30.710 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:30.710 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:27:30.711 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:30.711 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.711 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:30.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:30.711 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:30.711 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.711 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:30.711 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:30.711 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.711 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.711 { 00:27:30.711 "params": { 00:27:30.711 "name": "Nvme$subsystem", 00:27:30.711 "trtype": "$TEST_TRANSPORT", 00:27:30.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.711 "adrfam": "ipv4", 00:27:30.711 "trsvcid": "$NVMF_PORT", 00:27:30.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.711 "hdgst": ${hdgst:-false}, 00:27:30.711 "ddgst": ${ddgst:-false} 00:27:30.711 }, 00:27:30.711 "method": "bdev_nvme_attach_controller" 00:27:30.711 } 00:27:30.711 EOF 00:27:30.711 )") 00:27:30.711 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:30.711 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
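The heredoc traced just above is gen_nvmf_target_json: for each subsystem id it expands one bdev_nvme_attach_controller entry (the resolved form is printed immediately below) and hands the result to bdevperf as a JSON config over an anonymous pipe, which is why the command line shows --json /dev/fd/63. A hedged sketch of the same pattern follows, using the values from this run; the outer "subsystems"/"bdev" wrapper is an assumption about the final document shape, since only the per-controller entry and the jq call appear in this trace.

#!/usr/bin/env bash
# Hedged sketch of gen_nvmf_target_json feeding bdevperf. The wrapper object is
# assumed; only the inner attach-controller entry is shown in the trace.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_config() {
    local subsystem=$1
    cat <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme$subsystem",
            "trtype": "$TEST_TRANSPORT",
            "traddr": "$NVMF_FIRST_TARGET_IP",
            "adrfam": "ipv4",
            "trsvcid": "$NVMF_PORT",
            "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
            "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
}

# Process substitution exposes the generated config as /dev/fd/<n>, matching
# the --json /dev/fd/63 seen in the traced bdevperf command line.
"$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_config 0 | jq .) -q 64 -o 65536 -w verify -t 10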
00:27:30.711 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:30.711 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:30.711 "params": { 00:27:30.711 "name": "Nvme0", 00:27:30.711 "trtype": "tcp", 00:27:30.711 "traddr": "10.0.0.2", 00:27:30.711 "adrfam": "ipv4", 00:27:30.711 "trsvcid": "4420", 00:27:30.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:30.711 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:30.711 "hdgst": false, 00:27:30.711 "ddgst": false 00:27:30.711 }, 00:27:30.711 "method": "bdev_nvme_attach_controller" 00:27:30.711 }' 00:27:30.969 [2024-12-10 04:15:25.115647] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:27:30.969 [2024-12-10 04:15:25.115727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522367 ] 00:27:30.969 [2024-12-10 04:15:25.185137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.969 [2024-12-10 04:15:25.245878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.228 Running I/O for 10 seconds... 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:27:31.486 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:27:31.748 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:27:31.748 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:31.748 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:31.748 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:31.748 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.748 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:31.748 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.748 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=553 00:27:31.748 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 553 -ge 100 ']' 00:27:31.748 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:27:31.748 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:27:31.748 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:27:31.748 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:31.748 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.748 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:31.748 [2024-12-10 04:15:25.995654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.748 [2024-12-10 04:15:25.995714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.748 [2024-12-10 04:15:25.995734] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.748 [2024-12-10 04:15:25.995748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:25.995763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.749 [2024-12-10 04:15:25.995776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:25.995790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.749 [2024-12-10 04:15:25.995804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:25.995817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d89660 is same with the state(6) to be set 00:27:31.749 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.749 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:31.749 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.749 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:31.749 04:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.749 04:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:27:31.749 [2024-12-10 04:15:26.003718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.003747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.003773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.003789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.003805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.003819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.003834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.003859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.003874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.003888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.003903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.003917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.003932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.003947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.003962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.003976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.003992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.749 [2024-12-10 04:15:26.004771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.749 [2024-12-10 04:15:26.004786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.004800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.004815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.004829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.004860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.004875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.004890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.004905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.004920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.004934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.004949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.004964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.004979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.004993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:31.750 [2024-12-10 04:15:26.005427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.750 [2024-12-10 04:15:26.005717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.750 [2024-12-10 04:15:26.005835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d89660 (9): Bad file descriptor 00:27:31.750 [2024-12-10 
04:15:26.006972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:31.750 task offset: 81920 on job bdev=Nvme0n1 fails 00:27:31.750 00:27:31.750 Latency(us) 00:27:31.750 [2024-12-10T03:15:26.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:31.750 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:31.750 Job: Nvme0n1 ended in about 0.41 seconds with error 00:27:31.750 Verification LBA range: start 0x0 length 0x400 00:27:31.750 Nvme0n1 : 0.41 1548.19 96.76 154.82 0.00 36514.42 2487.94 37865.24 00:27:31.750 [2024-12-10T03:15:26.139Z] =================================================================================================================== 00:27:31.750 [2024-12-10T03:15:26.139Z] Total : 1548.19 96.76 154.82 0.00 36514.42 2487.94 37865.24 00:27:31.750 [2024-12-10 04:15:26.008892] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:31.750 [2024-12-10 04:15:26.012622] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:27:32.688 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2522367 00:27:32.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2522367) - No such process 00:27:32.688 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:27:32.688 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:27:32.688 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:32.688 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:27:32.688 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:32.688 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:32.688 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:32.688 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:32.688 { 00:27:32.688 "params": { 00:27:32.688 "name": "Nvme$subsystem", 00:27:32.688 "trtype": "$TEST_TRANSPORT", 00:27:32.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.688 "adrfam": "ipv4", 00:27:32.688 "trsvcid": "$NVMF_PORT", 00:27:32.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.688 "hdgst": ${hdgst:-false}, 00:27:32.688 "ddgst": ${ddgst:-false} 00:27:32.688 }, 00:27:32.688 "method": "bdev_nvme_attach_controller" 00:27:32.688 } 00:27:32.688 EOF 00:27:32.688 )") 00:27:32.688 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:32.688 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:27:32.688 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:32.688 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:32.688 "params": { 00:27:32.689 "name": "Nvme0", 00:27:32.689 "trtype": "tcp", 00:27:32.689 "traddr": "10.0.0.2", 00:27:32.689 "adrfam": "ipv4", 00:27:32.689 "trsvcid": "4420", 00:27:32.689 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.689 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:32.689 "hdgst": false, 00:27:32.689 "ddgst": false 00:27:32.689 }, 00:27:32.689 "method": "bdev_nvme_attach_controller" 00:27:32.689 }' 00:27:32.689 [2024-12-10 04:15:27.056484] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:27:32.689 [2024-12-10 04:15:27.056598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522637 ] 00:27:32.947 [2024-12-10 04:15:27.127504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.947 [2024-12-10 04:15:27.187090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.205 Running I/O for 1 seconds... 00:27:34.141 1600.00 IOPS, 100.00 MiB/s 00:27:34.141 Latency(us) 00:27:34.141 [2024-12-10T03:15:28.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.141 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:34.141 Verification LBA range: start 0x0 length 0x400 00:27:34.141 Nvme0n1 : 1.02 1636.34 102.27 0.00 0.00 38481.18 6359.42 34369.99 00:27:34.141 [2024-12-10T03:15:28.530Z] =================================================================================================================== 00:27:34.141 [2024-12-10T03:15:28.530Z] Total : 1636.34 102.27 0.00 0.00 38481.18 6359.42 34369.99 00:27:34.401 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:27:34.401 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:27:34.401 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:34.401 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:34.401 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:27:34.401 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:34.401 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:27:34.401 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:34.401 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:27:34.402 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:34.402 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:34.402 rmmod nvme_tcp 00:27:34.402 rmmod nvme_fabrics 00:27:34.402 rmmod nvme_keyring 00:27:34.402 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:34.402 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:27:34.402 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:27:34.402 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2522317 ']' 00:27:34.402 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2522317 00:27:34.402 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2522317 ']' 00:27:34.402 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2522317 00:27:34.402 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:27:34.402 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:34.402 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2522317 00:27:34.402 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:34.402 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:34.402 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2522317' 00:27:34.402 killing process with pid 2522317 00:27:34.402 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2522317 00:27:34.402 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2522317 00:27:34.661 [2024-12-10 04:15:28.980787] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:27:34.661 04:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:34.661 04:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:34.661 04:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:34.661 04:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:27:34.661 04:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:27:34.661 04:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:34.661 04:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:27:34.661 04:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:34.661 04:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:27:34.661 04:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.661 04:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:34.661 04:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:37.197 00:27:37.197 real 0m8.904s 00:27:37.197 user 0m17.853s 00:27:37.197 sys 0m3.762s 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:37.197 ************************************ 00:27:37.197 END TEST nvmf_host_management 00:27:37.197 ************************************ 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:37.197 ************************************ 00:27:37.197 START TEST nvmf_lvol 00:27:37.197 ************************************ 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:37.197 * Looking for test storage... 
00:27:37.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:37.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.197 --rc genhtml_branch_coverage=1 00:27:37.197 --rc genhtml_function_coverage=1 00:27:37.197 --rc genhtml_legend=1 00:27:37.197 --rc geninfo_all_blocks=1 00:27:37.197 --rc geninfo_unexecuted_blocks=1 00:27:37.197 00:27:37.197 ' 00:27:37.197 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:37.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.197 --rc genhtml_branch_coverage=1 00:27:37.198 --rc genhtml_function_coverage=1 00:27:37.198 --rc genhtml_legend=1 00:27:37.198 --rc geninfo_all_blocks=1 00:27:37.198 --rc geninfo_unexecuted_blocks=1 00:27:37.198 00:27:37.198 ' 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:37.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.198 --rc genhtml_branch_coverage=1 00:27:37.198 --rc genhtml_function_coverage=1 00:27:37.198 --rc genhtml_legend=1 00:27:37.198 --rc geninfo_all_blocks=1 00:27:37.198 --rc geninfo_unexecuted_blocks=1 00:27:37.198 00:27:37.198 ' 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:37.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.198 --rc genhtml_branch_coverage=1 00:27:37.198 --rc genhtml_function_coverage=1 00:27:37.198 --rc genhtml_legend=1 00:27:37.198 --rc geninfo_all_blocks=1 00:27:37.198 --rc geninfo_unexecuted_blocks=1 00:27:37.198 00:27:37.198 ' 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.198 04:15:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:27:37.198 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:39.107 04:15:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:39.107 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:39.107 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:39.107 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.107 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:39.107 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:39.108 
04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:39.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:39.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:27:39.108 00:27:39.108 --- 10.0.0.2 ping statistics --- 00:27:39.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.108 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:39.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:39.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:27:39.108 00:27:39.108 --- 10.0.0.1 ping statistics --- 00:27:39.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.108 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2524835 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2524835 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2524835 ']' 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:39.108 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:39.108 [2024-12-10 04:15:33.486112] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:27:39.108 [2024-12-10 04:15:33.487312] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:27:39.108 [2024-12-10 04:15:33.487367] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.366 [2024-12-10 04:15:33.562954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:39.366 [2024-12-10 04:15:33.619363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:39.366 [2024-12-10 04:15:33.619432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:39.366 [2024-12-10 04:15:33.619460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:39.366 [2024-12-10 04:15:33.619471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:39.366 [2024-12-10 04:15:33.619480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:39.367 [2024-12-10 04:15:33.621019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.367 [2024-12-10 04:15:33.621076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:39.367 [2024-12-10 04:15:33.621080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.367 [2024-12-10 04:15:33.706800] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:39.367 [2024-12-10 04:15:33.707001] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:39.367 [2024-12-10 04:15:33.707036] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:39.367 [2024-12-10 04:15:33.707258] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
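The target launch traced above follows the nvmfappstart pattern from nvmf/common.sh: nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace with --interrupt-mode and core mask 0x7, and the harness blocks on waitforlisten until the RPC server answers on /var/tmp/spdk.sock before the lvol test body runs. A minimal standalone sketch of that launch-and-wait step, assuming the SPDK paths shown in the log and using a simplified polling loop in place of the real waitforlisten helper:

    #!/usr/bin/env bash
    # Sketch only: SPDK_DIR and the network namespace are assumed to match the job layout above.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC_SOCK=/var/tmp/spdk.sock

    # Start the NVMe-oF target in interrupt mode on cores 0-2 inside the test namespace.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    nvmfpid=$!

    # Poll the RPC socket until the target answers (simplified stand-in for waitforlisten()).
    until "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"

Once the socket answers, the test proceeds to drive the target over rpc.py exactly as shown in the trace that follows (nvmf_create_transport, bdev_malloc_create, bdev_raid_create, bdev_lvol_create_lvstore, and so on).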
00:27:39.367 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:39.367 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:27:39.367 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:39.367 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:39.367 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:39.627 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.627 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:39.627 [2024-12-10 04:15:34.005784] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.887 04:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:40.147 04:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:27:40.147 04:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:40.407 04:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:27:40.407 04:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:27:40.667 04:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:27:40.924 04:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bf6fc344-908d-4269-b7e6-3acfff788d10 00:27:40.924 04:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bf6fc344-908d-4269-b7e6-3acfff788d10 lvol 20 00:27:41.182 04:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=75b861d8-c836-4923-b044-9126fd0aad40 00:27:41.182 04:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:41.440 04:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 75b861d8-c836-4923-b044-9126fd0aad40 00:27:41.698 04:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:41.956 [2024-12-10 04:15:36.281932] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:27:41.956 04:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:42.214 04:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2525145 00:27:42.214 04:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:27:42.215 04:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:27:43.204 04:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 75b861d8-c836-4923-b044-9126fd0aad40 MY_SNAPSHOT 00:27:43.773 04:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8d6b00f5-6e29-48b4-afe6-02d9c4242a70 00:27:43.773 04:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 75b861d8-c836-4923-b044-9126fd0aad40 30 00:27:44.032 04:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8d6b00f5-6e29-48b4-afe6-02d9c4242a70 MY_CLONE 00:27:44.290 04:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e3b6374a-33ff-478d-ba4f-834d9b151e23 00:27:44.290 04:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e3b6374a-33ff-478d-ba4f-834d9b151e23 00:27:44.859 04:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2525145 00:27:52.980 Initializing NVMe Controllers 00:27:52.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:52.980 Controller IO queue size 128, less than required. 00:27:52.980 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:52.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:27:52.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:27:52.980 Initialization complete. Launching workers. 
00:27:52.980 ======================================================== 00:27:52.980 Latency(us) 00:27:52.980 Device Information : IOPS MiB/s Average min max 00:27:52.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10330.80 40.35 12392.98 4334.68 59731.50 00:27:52.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10347.40 40.42 12374.85 3704.15 57294.79 00:27:52.980 ======================================================== 00:27:52.980 Total : 20678.20 80.77 12383.91 3704.15 59731.50 00:27:52.980 00:27:52.980 04:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:52.980 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 75b861d8-c836-4923-b044-9126fd0aad40 00:27:53.238 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bf6fc344-908d-4269-b7e6-3acfff788d10 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:53.496 rmmod nvme_tcp 00:27:53.496 rmmod nvme_fabrics 00:27:53.496 rmmod nvme_keyring 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2524835 ']' 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2524835 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2524835 ']' 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2524835 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2524835 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2524835' 00:27:53.496 killing process with pid 2524835 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2524835 00:27:53.496 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2524835 00:27:53.754 04:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:53.754 04:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:53.754 04:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:53.754 04:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:27:53.754 04:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:27:53.754 04:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:53.754 04:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:27:53.754 04:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:53.754 04:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:53.754 04:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.754 04:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:53.754 04:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:56.291 00:27:56.291 real 0m19.034s 00:27:56.291 user 0m55.263s 00:27:56.291 sys 0m8.195s 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:56.291 ************************************ 00:27:56.291 END TEST nvmf_lvol 00:27:56.291 ************************************ 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:56.291 ************************************ 00:27:56.291 START TEST nvmf_lvs_grow 00:27:56.291 
************************************ 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:56.291 * Looking for test storage... 00:27:56.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:27:56.291 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:56.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.292 --rc genhtml_branch_coverage=1 00:27:56.292 --rc genhtml_function_coverage=1 00:27:56.292 --rc genhtml_legend=1 00:27:56.292 --rc geninfo_all_blocks=1 00:27:56.292 --rc geninfo_unexecuted_blocks=1 00:27:56.292 00:27:56.292 ' 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:56.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.292 --rc genhtml_branch_coverage=1 00:27:56.292 --rc genhtml_function_coverage=1 00:27:56.292 --rc genhtml_legend=1 00:27:56.292 --rc geninfo_all_blocks=1 00:27:56.292 --rc geninfo_unexecuted_blocks=1 00:27:56.292 00:27:56.292 ' 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:56.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.292 --rc genhtml_branch_coverage=1 00:27:56.292 --rc genhtml_function_coverage=1 00:27:56.292 --rc genhtml_legend=1 00:27:56.292 --rc geninfo_all_blocks=1 00:27:56.292 --rc geninfo_unexecuted_blocks=1 00:27:56.292 00:27:56.292 ' 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:56.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.292 --rc genhtml_branch_coverage=1 00:27:56.292 --rc genhtml_function_coverage=1 00:27:56.292 --rc genhtml_legend=1 00:27:56.292 --rc geninfo_all_blocks=1 00:27:56.292 --rc geninfo_unexecuted_blocks=1 00:27:56.292 00:27:56.292 ' 00:27:56.292 04:15:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
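Note: a few entries above, nvmf/common.sh derives NVME_HOSTNQN with `nvme gen-hostnqn`, records the matching NVME_HOSTID, and sets NVME_CONNECT='nvme connect'. Nothing in this trace executes those pieces; the following is only a hedged illustration of how such variables are typically combined on an initiator, using the target address, port, and subsystem NQN from this run:

  #!/usr/bin/env bash
  # Illustration only; not run anywhere in this log.
  HOSTNQN=$(nvme gen-hostnqn)                       # e.g. nqn.2014-08.org.nvmexpress:uuid:...
  HOSTID=${HOSTNQN##*uuid:}                         # the uuid portion, mirroring NVME_HOSTID above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
       -n nqn.2016-06.io.spdk:cnode0 \
       --hostnqn="$HOSTNQN" --hostid="$HOSTID"
  nvme list-subsys                                  # confirm the new controller appeared
  nvme disconnect -n nqn.2016-06.io.spdk:cnode0     # tear the connection back down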
00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:27:56.292 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:58.194 04:15:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
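Note: the loop entered above resolves each candidate E810 PCI function to its kernel net device by globbing sysfs, which is where the "Found net devices under ..." lines that follow come from. A minimal standalone sketch of that lookup (filtering on 8086:159b, the device id this run detects; common.sh also checks 0x1592):

  #!/usr/bin/env bash
  # Sketch: map Intel E810 PCI functions to net device names the same way the trace does.
  for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # same sysfs glob as nvmf/common.sh
      [[ -e ${pci_net_devs[0]} ]] || continue             # skip functions with no netdev bound
      pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path, keep e.g. cvl_0_0
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done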
00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:58.194 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:58.194 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:58.194 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:58.194 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.194 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.195 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:58.195 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.195 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.195 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:58.195 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:58.195 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.195 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.195 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:58.195 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:58.195 04:15:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.195 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.453 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:58.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:27:58.454 00:27:58.454 --- 10.0.0.2 ping statistics --- 00:27:58.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.454 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:58.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:27:58.454 00:27:58.454 --- 10.0.0.1 ping statistics --- 00:27:58.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.454 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2528518 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2528518 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2528518 ']' 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.454 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:58.454 [2024-12-10 04:15:52.742028] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
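Note: the block above builds the TCP test topology on a single host: one E810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, its sibling port (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables rule admits TCP port 4420, both directions are ping-checked, and nvmf_tgt is then started inside the namespace in interrupt mode. Condensed into a sketch (interface, namespace, and path names exactly as in this run):

  #!/usr/bin/env bash
  set -e
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                          # target-side port lives in the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                       # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1                   # target ns -> root ns
  # Start the target inside the namespace on a single core, interrupt mode:
  ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &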
00:27:58.454 [2024-12-10 04:15:52.743084] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:27:58.454 [2024-12-10 04:15:52.743136] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.454 [2024-12-10 04:15:52.816850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.714 [2024-12-10 04:15:52.875168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.714 [2024-12-10 04:15:52.875231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.714 [2024-12-10 04:15:52.875259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.714 [2024-12-10 04:15:52.875270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.714 [2024-12-10 04:15:52.875280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:58.714 [2024-12-10 04:15:52.875968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.714 [2024-12-10 04:15:52.973258] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:58.714 [2024-12-10 04:15:52.973572] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:58.714 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:58.714 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:27:58.714 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:58.714 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:58.714 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:58.714 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.714 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:58.974 [2024-12-10 04:15:53.280658] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.974 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:27:58.974 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:58.974 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:58.974 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:58.974 ************************************ 00:27:58.974 START TEST lvs_grow_clean 00:27:58.974 ************************************ 00:27:58.974 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:27:58.974 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:27:58.974 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:27:58.974 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:27:58.974 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:27:58.974 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:27:58.974 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:27:58.974 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:58.974 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:58.974 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:59.541 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:27:59.541 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:27:59.541 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f44f015b-6f5b-450e-ab55-4253318c7948 00:27:59.541 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f44f015b-6f5b-450e-ab55-4253318c7948 00:27:59.541 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:27:59.800 04:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:27:59.800 04:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:27:59.800 04:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f44f015b-6f5b-450e-ab55-4253318c7948 lvol 150 00:28:00.367 04:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=bf8116eb-b7f5-415e-80cb-e1fa7e003134 00:28:00.367 04:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:00.367 04:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:00.367 [2024-12-10 04:15:54.716511] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:00.367 [2024-12-10 04:15:54.716657] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:00.367 true 00:28:00.367 04:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f44f015b-6f5b-450e-ab55-4253318c7948 00:28:00.367 04:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:00.626 04:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:00.895 04:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:01.155 04:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bf8116eb-b7f5-415e-80cb-e1fa7e003134 00:28:01.414 04:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:01.673 [2024-12-10 04:15:55.817034] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.673 04:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:01.931 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2528959 00:28:01.931 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:01.931 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:01.931 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2528959 /var/tmp/bdevperf.sock 00:28:01.931 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2528959 ']' 00:28:01.931 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:01.931 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:01.931 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:01.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:01.931 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:01.931 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:01.931 [2024-12-10 04:15:56.154781] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:28:01.931 [2024-12-10 04:15:56.154873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528959 ] 00:28:01.931 [2024-12-10 04:15:56.221766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.931 [2024-12-10 04:15:56.279778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.189 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:02.189 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:28:02.189 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:02.447 Nvme0n1 00:28:02.447 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:02.706 [ 00:28:02.706 { 00:28:02.706 "name": "Nvme0n1", 00:28:02.706 "aliases": [ 00:28:02.706 "bf8116eb-b7f5-415e-80cb-e1fa7e003134" 00:28:02.706 ], 00:28:02.706 "product_name": "NVMe disk", 00:28:02.706 "block_size": 4096, 00:28:02.706 "num_blocks": 38912, 00:28:02.706 "uuid": "bf8116eb-b7f5-415e-80cb-e1fa7e003134", 00:28:02.706 "numa_id": 0, 00:28:02.706 "assigned_rate_limits": { 00:28:02.706 "rw_ios_per_sec": 0, 00:28:02.706 "rw_mbytes_per_sec": 0, 00:28:02.706 "r_mbytes_per_sec": 0, 00:28:02.706 "w_mbytes_per_sec": 0 00:28:02.706 }, 00:28:02.706 "claimed": false, 00:28:02.706 "zoned": false, 00:28:02.706 "supported_io_types": { 00:28:02.706 "read": true, 00:28:02.706 "write": true, 00:28:02.706 "unmap": true, 00:28:02.706 "flush": true, 00:28:02.706 "reset": true, 00:28:02.706 "nvme_admin": true, 00:28:02.706 "nvme_io": true, 00:28:02.706 "nvme_io_md": false, 00:28:02.706 "write_zeroes": true, 00:28:02.706 "zcopy": false, 00:28:02.706 "get_zone_info": false, 00:28:02.706 "zone_management": false, 00:28:02.706 "zone_append": false, 00:28:02.706 "compare": true, 00:28:02.706 "compare_and_write": true, 00:28:02.706 "abort": true, 00:28:02.706 "seek_hole": false, 00:28:02.706 "seek_data": false, 00:28:02.706 "copy": true, 
00:28:02.706 "nvme_iov_md": false 00:28:02.706 }, 00:28:02.706 "memory_domains": [ 00:28:02.706 { 00:28:02.706 "dma_device_id": "system", 00:28:02.706 "dma_device_type": 1 00:28:02.706 } 00:28:02.706 ], 00:28:02.706 "driver_specific": { 00:28:02.706 "nvme": [ 00:28:02.706 { 00:28:02.706 "trid": { 00:28:02.706 "trtype": "TCP", 00:28:02.706 "adrfam": "IPv4", 00:28:02.706 "traddr": "10.0.0.2", 00:28:02.706 "trsvcid": "4420", 00:28:02.706 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:02.706 }, 00:28:02.706 "ctrlr_data": { 00:28:02.706 "cntlid": 1, 00:28:02.706 "vendor_id": "0x8086", 00:28:02.706 "model_number": "SPDK bdev Controller", 00:28:02.706 "serial_number": "SPDK0", 00:28:02.706 "firmware_revision": "25.01", 00:28:02.706 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:02.706 "oacs": { 00:28:02.706 "security": 0, 00:28:02.706 "format": 0, 00:28:02.706 "firmware": 0, 00:28:02.706 "ns_manage": 0 00:28:02.706 }, 00:28:02.706 "multi_ctrlr": true, 00:28:02.706 "ana_reporting": false 00:28:02.706 }, 00:28:02.706 "vs": { 00:28:02.706 "nvme_version": "1.3" 00:28:02.706 }, 00:28:02.706 "ns_data": { 00:28:02.706 "id": 1, 00:28:02.706 "can_share": true 00:28:02.706 } 00:28:02.706 } 00:28:02.706 ], 00:28:02.706 "mp_policy": "active_passive" 00:28:02.706 } 00:28:02.706 } 00:28:02.706 ] 00:28:02.706 04:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2529003 00:28:02.706 04:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:02.706 04:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:02.966 Running I/O for 10 seconds... 
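Note: the run above is driven by bdevperf in its RPC-controlled mode: the process is started with -z so it waits idle, a bdev_nvme controller is attached to the exported subsystem over bdevperf's private RPC socket, and perform_tests then kicks off the 10-second randwrite workload. Condensed sketch, with paths and arguments exactly as traced above (not a replacement for the test script, which also uses waitforlisten and trap-based cleanup):

  #!/usr/bin/env bash
  set -e
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock

  # 1. Start bdevperf idle (-z) on core 1 with the same I/O shape as the test.
  "$SPDK/build/examples/bdevperf" -r "$SOCK" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  bdevperf_pid=$!
  sleep 2   # crude stand-in for the test's waitforlisten on the RPC socket

  # 2. Attach an NVMe-oF/TCP controller to the namespace exported by the target.
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

  # 3. Run the workload and collect the per-second results bdevperf prints.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
  kill "$bdevperf_pid"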
00:28:03.903 Latency(us) 00:28:03.903 [2024-12-10T03:15:58.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:03.903 Nvme0n1 : 1.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:28:03.903 [2024-12-10T03:15:58.292Z] =================================================================================================================== 00:28:03.903 [2024-12-10T03:15:58.292Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:28:03.903 00:28:04.839 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f44f015b-6f5b-450e-ab55-4253318c7948 00:28:04.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:04.839 Nvme0n1 : 2.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:28:04.839 [2024-12-10T03:15:59.228Z] =================================================================================================================== 00:28:04.839 [2024-12-10T03:15:59.228Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:28:04.839 00:28:05.099 true 00:28:05.099 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f44f015b-6f5b-450e-ab55-4253318c7948 00:28:05.099 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:05.358 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:05.358 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:05.358 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2529003 00:28:05.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:05.927 Nvme0n1 : 3.00 14943.67 58.37 0.00 0.00 0.00 0.00 0.00 00:28:05.927 [2024-12-10T03:16:00.316Z] =================================================================================================================== 00:28:05.927 [2024-12-10T03:16:00.316Z] Total : 14943.67 58.37 0.00 0.00 0.00 0.00 0.00 00:28:05.927 00:28:06.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:06.863 Nvme0n1 : 4.00 15017.75 58.66 0.00 0.00 0.00 0.00 0.00 00:28:06.863 [2024-12-10T03:16:01.252Z] =================================================================================================================== 00:28:06.863 [2024-12-10T03:16:01.252Z] Total : 15017.75 58.66 0.00 0.00 0.00 0.00 0.00 00:28:06.863 00:28:07.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:07.795 Nvme0n1 : 5.00 15119.80 59.06 0.00 0.00 0.00 0.00 0.00 00:28:07.795 [2024-12-10T03:16:02.184Z] =================================================================================================================== 00:28:07.795 [2024-12-10T03:16:02.184Z] Total : 15119.80 59.06 0.00 0.00 0.00 0.00 0.00 00:28:07.795 00:28:09.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:09.172 Nvme0n1 : 6.00 15192.83 59.35 0.00 0.00 0.00 0.00 0.00 00:28:09.172 [2024-12-10T03:16:03.561Z] 
=================================================================================================================== 00:28:09.172 [2024-12-10T03:16:03.561Z] Total : 15192.83 59.35 0.00 0.00 0.00 0.00 0.00 00:28:09.172 00:28:10.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:10.107 Nvme0n1 : 7.00 15244.86 59.55 0.00 0.00 0.00 0.00 0.00 00:28:10.107 [2024-12-10T03:16:04.496Z] =================================================================================================================== 00:28:10.107 [2024-12-10T03:16:04.496Z] Total : 15244.86 59.55 0.00 0.00 0.00 0.00 0.00 00:28:10.107 00:28:11.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:11.043 Nvme0n1 : 8.00 15291.88 59.73 0.00 0.00 0.00 0.00 0.00 00:28:11.043 [2024-12-10T03:16:05.432Z] =================================================================================================================== 00:28:11.043 [2024-12-10T03:16:05.432Z] Total : 15291.88 59.73 0.00 0.00 0.00 0.00 0.00 00:28:11.043 00:28:11.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:11.983 Nvme0n1 : 9.00 15328.44 59.88 0.00 0.00 0.00 0.00 0.00 00:28:11.983 [2024-12-10T03:16:06.372Z] =================================================================================================================== 00:28:11.983 [2024-12-10T03:16:06.372Z] Total : 15328.44 59.88 0.00 0.00 0.00 0.00 0.00 00:28:11.983 00:28:12.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:12.923 Nvme0n1 : 10.00 15342.10 59.93 0.00 0.00 0.00 0.00 0.00 00:28:12.923 [2024-12-10T03:16:07.312Z] =================================================================================================================== 00:28:12.923 [2024-12-10T03:16:07.312Z] Total : 15342.10 59.93 0.00 0.00 0.00 0.00 0.00 00:28:12.923 00:28:12.923 00:28:12.923 Latency(us) 00:28:12.923 [2024-12-10T03:16:07.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:12.923 Nvme0n1 : 10.00 15339.67 59.92 0.00 0.00 8338.78 4344.79 20486.07 00:28:12.923 [2024-12-10T03:16:07.312Z] =================================================================================================================== 00:28:12.923 [2024-12-10T03:16:07.312Z] Total : 15339.67 59.92 0.00 0.00 8338.78 4344.79 20486.07 00:28:12.923 { 00:28:12.923 "results": [ 00:28:12.923 { 00:28:12.923 "job": "Nvme0n1", 00:28:12.923 "core_mask": "0x2", 00:28:12.923 "workload": "randwrite", 00:28:12.923 "status": "finished", 00:28:12.923 "queue_depth": 128, 00:28:12.923 "io_size": 4096, 00:28:12.923 "runtime": 10.003541, 00:28:12.923 "iops": 15339.668223482066, 00:28:12.923 "mibps": 59.92057899797682, 00:28:12.923 "io_failed": 0, 00:28:12.923 "io_timeout": 0, 00:28:12.923 "avg_latency_us": 8338.778234383903, 00:28:12.923 "min_latency_us": 4344.794074074074, 00:28:12.923 "max_latency_us": 20486.068148148148 00:28:12.923 } 00:28:12.923 ], 00:28:12.923 "core_count": 1 00:28:12.923 } 00:28:12.923 04:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2528959 00:28:12.923 04:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2528959 ']' 00:28:12.923 04:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2528959 
00:28:12.923 04:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:28:12.923 04:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:12.923 04:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2528959 00:28:12.923 04:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:12.923 04:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:12.923 04:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2528959' 00:28:12.923 killing process with pid 2528959 00:28:12.923 04:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2528959 00:28:12.923 Received shutdown signal, test time was about 10.000000 seconds 00:28:12.923 00:28:12.923 Latency(us) 00:28:12.923 [2024-12-10T03:16:07.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.923 [2024-12-10T03:16:07.312Z] =================================================================================================================== 00:28:12.923 [2024-12-10T03:16:07.312Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:12.923 04:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2528959 00:28:13.183 04:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:13.443 04:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:13.702 04:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f44f015b-6f5b-450e-ab55-4253318c7948 00:28:13.702 04:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:13.960 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:13.960 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:28:13.960 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:14.218 [2024-12-10 04:16:08.504581] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:14.218 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f44f015b-6f5b-450e-ab55-4253318c7948 
00:28:14.218 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:28:14.218 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f44f015b-6f5b-450e-ab55-4253318c7948 00:28:14.218 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:14.218 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:14.218 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:14.218 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:14.218 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:14.218 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:14.218 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:14.218 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:14.218 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f44f015b-6f5b-450e-ab55-4253318c7948 00:28:14.476 request: 00:28:14.476 { 00:28:14.476 "uuid": "f44f015b-6f5b-450e-ab55-4253318c7948", 00:28:14.476 "method": "bdev_lvol_get_lvstores", 00:28:14.476 "req_id": 1 00:28:14.476 } 00:28:14.476 Got JSON-RPC error response 00:28:14.476 response: 00:28:14.476 { 00:28:14.476 "code": -19, 00:28:14.476 "message": "No such device" 00:28:14.476 } 00:28:14.476 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:28:14.476 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:14.476 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:14.476 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:14.476 04:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:14.736 aio_bdev 00:28:14.736 04:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
bf8116eb-b7f5-415e-80cb-e1fa7e003134 00:28:14.736 04:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=bf8116eb-b7f5-415e-80cb-e1fa7e003134 00:28:14.736 04:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:14.736 04:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:28:14.736 04:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:14.736 04:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:14.736 04:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:14.994 04:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bf8116eb-b7f5-415e-80cb-e1fa7e003134 -t 2000 00:28:15.253 [ 00:28:15.253 { 00:28:15.253 "name": "bf8116eb-b7f5-415e-80cb-e1fa7e003134", 00:28:15.253 "aliases": [ 00:28:15.253 "lvs/lvol" 00:28:15.253 ], 00:28:15.253 "product_name": "Logical Volume", 00:28:15.253 "block_size": 4096, 00:28:15.253 "num_blocks": 38912, 00:28:15.253 "uuid": "bf8116eb-b7f5-415e-80cb-e1fa7e003134", 00:28:15.253 "assigned_rate_limits": { 00:28:15.253 "rw_ios_per_sec": 0, 00:28:15.253 "rw_mbytes_per_sec": 0, 00:28:15.253 "r_mbytes_per_sec": 0, 00:28:15.253 "w_mbytes_per_sec": 0 00:28:15.253 }, 00:28:15.253 "claimed": false, 00:28:15.253 "zoned": false, 00:28:15.253 "supported_io_types": { 00:28:15.253 "read": true, 00:28:15.253 "write": true, 00:28:15.253 "unmap": true, 00:28:15.253 "flush": false, 00:28:15.253 "reset": true, 00:28:15.253 "nvme_admin": false, 00:28:15.253 "nvme_io": false, 00:28:15.253 "nvme_io_md": false, 00:28:15.253 "write_zeroes": true, 00:28:15.253 "zcopy": false, 00:28:15.253 "get_zone_info": false, 00:28:15.253 "zone_management": false, 00:28:15.253 "zone_append": false, 00:28:15.253 "compare": false, 00:28:15.253 "compare_and_write": false, 00:28:15.253 "abort": false, 00:28:15.253 "seek_hole": true, 00:28:15.253 "seek_data": true, 00:28:15.253 "copy": false, 00:28:15.253 "nvme_iov_md": false 00:28:15.253 }, 00:28:15.253 "driver_specific": { 00:28:15.253 "lvol": { 00:28:15.253 "lvol_store_uuid": "f44f015b-6f5b-450e-ab55-4253318c7948", 00:28:15.253 "base_bdev": "aio_bdev", 00:28:15.253 "thin_provision": false, 00:28:15.253 "num_allocated_clusters": 38, 00:28:15.253 "snapshot": false, 00:28:15.253 "clone": false, 00:28:15.253 "esnap_clone": false 00:28:15.253 } 00:28:15.253 } 00:28:15.253 } 00:28:15.253 ] 00:28:15.253 04:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:28:15.253 04:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f44f015b-6f5b-450e-ab55-4253318c7948 00:28:15.253 04:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:15.513 04:16:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:15.513 04:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f44f015b-6f5b-450e-ab55-4253318c7948 00:28:15.513 04:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:15.772 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:15.772 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bf8116eb-b7f5-415e-80cb-e1fa7e003134 00:28:16.338 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f44f015b-6f5b-450e-ab55-4253318c7948 00:28:16.597 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:16.857 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:16.857 00:28:16.857 real 0m17.719s 00:28:16.857 user 0m17.363s 00:28:16.857 sys 0m1.817s 00:28:16.857 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.857 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:16.857 ************************************ 00:28:16.857 END TEST lvs_grow_clean 00:28:16.857 ************************************ 00:28:16.857 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:28:16.857 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:16.858 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:16.858 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:16.858 ************************************ 00:28:16.858 START TEST lvs_grow_dirty 00:28:16.858 ************************************ 00:28:16.858 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:28:16.858 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:16.858 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:16.858 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:16.858 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:16.858 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:16.858 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:16.858 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:16.858 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:16.858 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:17.118 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:17.118 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:17.377 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e10262a8-c75f-4616-baf1-eca86ec9e510 00:28:17.377 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10262a8-c75f-4616-baf1-eca86ec9e510 00:28:17.377 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:17.637 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:17.637 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:17.637 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e10262a8-c75f-4616-baf1-eca86ec9e510 lvol 150 00:28:17.897 04:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=376320c3-ae5e-417c-850a-9b8967bb8f1f 00:28:17.897 04:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:17.897 04:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:18.155 [2024-12-10 04:16:12.480519] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:18.155 [2024-12-10 04:16:12.480673] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:18.155 true 00:28:18.155 04:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10262a8-c75f-4616-baf1-eca86ec9e510 00:28:18.155 04:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:18.413 04:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:18.413 04:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:18.673 04:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 376320c3-ae5e-417c-850a-9b8967bb8f1f 00:28:19.243 04:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:19.243 [2024-12-10 04:16:13.576872] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.243 04:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:19.501 04:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2530993 00:28:19.501 04:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:19.501 04:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:19.501 04:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2530993 /var/tmp/bdevperf.sock 00:28:19.501 04:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2530993 ']' 00:28:19.501 04:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:19.501 04:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.501 04:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:19.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:19.501 04:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.501 04:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:19.759 [2024-12-10 04:16:13.906749] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:28:19.759 [2024-12-10 04:16:13.906853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2530993 ] 00:28:19.759 [2024-12-10 04:16:13.975224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.759 [2024-12-10 04:16:14.038765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.017 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:20.017 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:20.017 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:20.275 Nvme0n1 00:28:20.275 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:20.533 [ 00:28:20.533 { 00:28:20.533 "name": "Nvme0n1", 00:28:20.533 "aliases": [ 00:28:20.533 "376320c3-ae5e-417c-850a-9b8967bb8f1f" 00:28:20.533 ], 00:28:20.533 "product_name": "NVMe disk", 00:28:20.533 "block_size": 4096, 00:28:20.533 "num_blocks": 38912, 00:28:20.533 "uuid": "376320c3-ae5e-417c-850a-9b8967bb8f1f", 00:28:20.533 "numa_id": 0, 00:28:20.533 "assigned_rate_limits": { 00:28:20.533 "rw_ios_per_sec": 0, 00:28:20.533 "rw_mbytes_per_sec": 0, 00:28:20.533 "r_mbytes_per_sec": 0, 00:28:20.533 "w_mbytes_per_sec": 0 00:28:20.533 }, 00:28:20.533 "claimed": false, 00:28:20.533 "zoned": false, 00:28:20.533 "supported_io_types": { 00:28:20.533 "read": true, 00:28:20.533 "write": true, 00:28:20.533 "unmap": true, 00:28:20.533 "flush": true, 00:28:20.533 "reset": true, 00:28:20.533 "nvme_admin": true, 00:28:20.533 "nvme_io": true, 00:28:20.533 "nvme_io_md": false, 00:28:20.533 "write_zeroes": true, 00:28:20.533 "zcopy": false, 00:28:20.533 "get_zone_info": false, 00:28:20.533 "zone_management": false, 00:28:20.533 "zone_append": false, 00:28:20.533 "compare": true, 00:28:20.533 "compare_and_write": true, 00:28:20.533 "abort": true, 00:28:20.533 "seek_hole": false, 00:28:20.533 "seek_data": false, 00:28:20.533 "copy": true, 00:28:20.533 "nvme_iov_md": false 00:28:20.533 }, 00:28:20.533 "memory_domains": [ 00:28:20.533 { 00:28:20.533 "dma_device_id": "system", 00:28:20.533 "dma_device_type": 1 00:28:20.533 } 00:28:20.533 ], 00:28:20.533 "driver_specific": { 00:28:20.533 "nvme": [ 00:28:20.533 { 00:28:20.533 "trid": { 00:28:20.533 "trtype": "TCP", 00:28:20.533 "adrfam": "IPv4", 00:28:20.533 "traddr": "10.0.0.2", 00:28:20.533 "trsvcid": "4420", 00:28:20.533 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:20.533 }, 00:28:20.533 "ctrlr_data": 
{ 00:28:20.533 "cntlid": 1, 00:28:20.533 "vendor_id": "0x8086", 00:28:20.533 "model_number": "SPDK bdev Controller", 00:28:20.533 "serial_number": "SPDK0", 00:28:20.533 "firmware_revision": "25.01", 00:28:20.533 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:20.533 "oacs": { 00:28:20.533 "security": 0, 00:28:20.533 "format": 0, 00:28:20.533 "firmware": 0, 00:28:20.533 "ns_manage": 0 00:28:20.533 }, 00:28:20.533 "multi_ctrlr": true, 00:28:20.533 "ana_reporting": false 00:28:20.533 }, 00:28:20.533 "vs": { 00:28:20.533 "nvme_version": "1.3" 00:28:20.533 }, 00:28:20.533 "ns_data": { 00:28:20.533 "id": 1, 00:28:20.533 "can_share": true 00:28:20.533 } 00:28:20.533 } 00:28:20.533 ], 00:28:20.533 "mp_policy": "active_passive" 00:28:20.533 } 00:28:20.533 } 00:28:20.533 ] 00:28:20.533 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2531128 00:28:20.533 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:20.533 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:20.844 Running I/O for 10 seconds... 00:28:21.866 Latency(us) 00:28:21.866 [2024-12-10T03:16:16.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:21.866 Nvme0n1 : 1.00 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:28:21.866 [2024-12-10T03:16:16.255Z] =================================================================================================================== 00:28:21.866 [2024-12-10T03:16:16.255Z] Total : 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:28:21.866 00:28:22.801 04:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e10262a8-c75f-4616-baf1-eca86ec9e510 00:28:22.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:22.801 Nvme0n1 : 2.00 14922.50 58.29 0.00 0.00 0.00 0.00 0.00 00:28:22.801 [2024-12-10T03:16:17.190Z] =================================================================================================================== 00:28:22.801 [2024-12-10T03:16:17.190Z] Total : 14922.50 58.29 0.00 0.00 0.00 0.00 0.00 00:28:22.801 00:28:22.801 true 00:28:22.801 04:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10262a8-c75f-4616-baf1-eca86ec9e510 00:28:22.801 04:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:23.369 04:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:23.369 04:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:23.370 04:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2531128 00:28:23.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:23.940 Nvme0n1 : 
3.00 15028.33 58.70 0.00 0.00 0.00 0.00 0.00 00:28:23.940 [2024-12-10T03:16:18.329Z] =================================================================================================================== 00:28:23.940 [2024-12-10T03:16:18.329Z] Total : 15028.33 58.70 0.00 0.00 0.00 0.00 0.00 00:28:23.940 00:28:24.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:24.881 Nvme0n1 : 4.00 15081.25 58.91 0.00 0.00 0.00 0.00 0.00 00:28:24.881 [2024-12-10T03:16:19.270Z] =================================================================================================================== 00:28:24.881 [2024-12-10T03:16:19.270Z] Total : 15081.25 58.91 0.00 0.00 0.00 0.00 0.00 00:28:24.881 00:28:25.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:25.820 Nvme0n1 : 5.00 15151.20 59.18 0.00 0.00 0.00 0.00 0.00 00:28:25.820 [2024-12-10T03:16:20.209Z] =================================================================================================================== 00:28:25.820 [2024-12-10T03:16:20.209Z] Total : 15151.20 59.18 0.00 0.00 0.00 0.00 0.00 00:28:25.820 00:28:26.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:26.759 Nvme0n1 : 6.00 15155.33 59.20 0.00 0.00 0.00 0.00 0.00 00:28:26.759 [2024-12-10T03:16:21.148Z] =================================================================================================================== 00:28:26.759 [2024-12-10T03:16:21.148Z] Total : 15155.33 59.20 0.00 0.00 0.00 0.00 0.00 00:28:26.759 00:28:27.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:27.699 Nvme0n1 : 7.00 15203.71 59.39 0.00 0.00 0.00 0.00 0.00 00:28:27.699 [2024-12-10T03:16:22.088Z] =================================================================================================================== 00:28:27.699 [2024-12-10T03:16:22.088Z] Total : 15203.71 59.39 0.00 0.00 0.00 0.00 0.00 00:28:27.699 00:28:29.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:29.083 Nvme0n1 : 8.00 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:28:29.083 [2024-12-10T03:16:23.472Z] =================================================================================================================== 00:28:29.083 [2024-12-10T03:16:23.472Z] Total : 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:28:29.083 00:28:30.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:30.023 Nvme0n1 : 9.00 15272.00 59.66 0.00 0.00 0.00 0.00 0.00 00:28:30.023 [2024-12-10T03:16:24.412Z] =================================================================================================================== 00:28:30.023 [2024-12-10T03:16:24.412Z] Total : 15272.00 59.66 0.00 0.00 0.00 0.00 0.00 00:28:30.023 00:28:30.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:30.956 Nvme0n1 : 10.00 15306.90 59.79 0.00 0.00 0.00 0.00 0.00 00:28:30.956 [2024-12-10T03:16:25.345Z] =================================================================================================================== 00:28:30.956 [2024-12-10T03:16:25.345Z] Total : 15306.90 59.79 0.00 0.00 0.00 0.00 0.00 00:28:30.956 00:28:30.956 00:28:30.956 Latency(us) 00:28:30.956 [2024-12-10T03:16:25.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:30.956 Nvme0n1 : 10.00 15312.36 59.81 0.00 0.00 8354.68 4320.52 18641.35 00:28:30.956 
[2024-12-10T03:16:25.345Z] =================================================================================================================== 00:28:30.956 [2024-12-10T03:16:25.345Z] Total : 15312.36 59.81 0.00 0.00 8354.68 4320.52 18641.35 00:28:30.956 { 00:28:30.956 "results": [ 00:28:30.956 { 00:28:30.956 "job": "Nvme0n1", 00:28:30.956 "core_mask": "0x2", 00:28:30.956 "workload": "randwrite", 00:28:30.956 "status": "finished", 00:28:30.956 "queue_depth": 128, 00:28:30.956 "io_size": 4096, 00:28:30.956 "runtime": 10.004792, 00:28:30.956 "iops": 15312.362315978184, 00:28:30.956 "mibps": 59.81391529678978, 00:28:30.956 "io_failed": 0, 00:28:30.956 "io_timeout": 0, 00:28:30.956 "avg_latency_us": 8354.678671543466, 00:28:30.956 "min_latency_us": 4320.521481481482, 00:28:30.956 "max_latency_us": 18641.35111111111 00:28:30.956 } 00:28:30.956 ], 00:28:30.956 "core_count": 1 00:28:30.956 } 00:28:30.956 04:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2530993 00:28:30.956 04:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2530993 ']' 00:28:30.956 04:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2530993 00:28:30.956 04:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:28:30.956 04:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.956 04:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2530993 00:28:30.956 04:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:30.956 04:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:30.956 04:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2530993' 00:28:30.957 killing process with pid 2530993 00:28:30.957 04:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2530993 00:28:30.957 Received shutdown signal, test time was about 10.000000 seconds 00:28:30.957 00:28:30.957 Latency(us) 00:28:30.957 [2024-12-10T03:16:25.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.957 [2024-12-10T03:16:25.346Z] =================================================================================================================== 00:28:30.957 [2024-12-10T03:16:25.346Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:30.957 04:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2530993 00:28:30.957 04:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:31.215 04:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:28:31.785 04:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10262a8-c75f-4616-baf1-eca86ec9e510 00:28:31.785 04:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:31.785 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:31.785 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:28:31.785 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2528518 00:28:31.785 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2528518 00:28:32.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2528518 Killed "${NVMF_APP[@]}" "$@" 00:28:32.045 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:28:32.045 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:28:32.045 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:32.045 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:32.045 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:32.045 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2532447 00:28:32.045 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:32.045 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2532447 00:28:32.045 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2532447 ']' 00:28:32.045 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.045 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.045 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:32.045 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.045 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:32.045 [2024-12-10 04:16:26.236318] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:32.045 [2024-12-10 04:16:26.237403] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:28:32.045 [2024-12-10 04:16:26.237483] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.045 [2024-12-10 04:16:26.313160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.045 [2024-12-10 04:16:26.370725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:32.045 [2024-12-10 04:16:26.370791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.045 [2024-12-10 04:16:26.370821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:32.045 [2024-12-10 04:16:26.370832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:32.045 [2024-12-10 04:16:26.370842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:32.045 [2024-12-10 04:16:26.371440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.304 [2024-12-10 04:16:26.467188] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:32.304 [2024-12-10 04:16:26.467476] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:32.304 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:32.304 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:32.304 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:32.304 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:32.304 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:32.304 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.304 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:32.562 [2024-12-10 04:16:26.782121] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:32.562 [2024-12-10 04:16:26.782261] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:32.562 [2024-12-10 04:16:26.782310] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:32.562 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:28:32.562 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 376320c3-ae5e-417c-850a-9b8967bb8f1f 00:28:32.562 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=376320c3-ae5e-417c-850a-9b8967bb8f1f 00:28:32.562 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:32.562 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:32.562 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:32.562 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:32.562 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:32.820 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 376320c3-ae5e-417c-850a-9b8967bb8f1f -t 2000 00:28:33.080 [ 00:28:33.080 { 00:28:33.080 "name": "376320c3-ae5e-417c-850a-9b8967bb8f1f", 00:28:33.080 "aliases": [ 00:28:33.080 "lvs/lvol" 00:28:33.080 ], 00:28:33.080 "product_name": "Logical Volume", 00:28:33.080 "block_size": 4096, 00:28:33.080 "num_blocks": 38912, 00:28:33.080 "uuid": "376320c3-ae5e-417c-850a-9b8967bb8f1f", 00:28:33.080 "assigned_rate_limits": { 00:28:33.080 "rw_ios_per_sec": 0, 00:28:33.080 "rw_mbytes_per_sec": 0, 00:28:33.080 
"r_mbytes_per_sec": 0, 00:28:33.080 "w_mbytes_per_sec": 0 00:28:33.080 }, 00:28:33.080 "claimed": false, 00:28:33.080 "zoned": false, 00:28:33.080 "supported_io_types": { 00:28:33.080 "read": true, 00:28:33.080 "write": true, 00:28:33.080 "unmap": true, 00:28:33.080 "flush": false, 00:28:33.080 "reset": true, 00:28:33.080 "nvme_admin": false, 00:28:33.080 "nvme_io": false, 00:28:33.080 "nvme_io_md": false, 00:28:33.080 "write_zeroes": true, 00:28:33.080 "zcopy": false, 00:28:33.080 "get_zone_info": false, 00:28:33.080 "zone_management": false, 00:28:33.080 "zone_append": false, 00:28:33.080 "compare": false, 00:28:33.080 "compare_and_write": false, 00:28:33.080 "abort": false, 00:28:33.080 "seek_hole": true, 00:28:33.080 "seek_data": true, 00:28:33.080 "copy": false, 00:28:33.080 "nvme_iov_md": false 00:28:33.080 }, 00:28:33.080 "driver_specific": { 00:28:33.080 "lvol": { 00:28:33.080 "lvol_store_uuid": "e10262a8-c75f-4616-baf1-eca86ec9e510", 00:28:33.080 "base_bdev": "aio_bdev", 00:28:33.080 "thin_provision": false, 00:28:33.080 "num_allocated_clusters": 38, 00:28:33.080 "snapshot": false, 00:28:33.080 "clone": false, 00:28:33.080 "esnap_clone": false 00:28:33.080 } 00:28:33.080 } 00:28:33.080 } 00:28:33.080 ] 00:28:33.080 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:33.080 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10262a8-c75f-4616-baf1-eca86ec9e510 00:28:33.080 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:28:33.340 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:28:33.340 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10262a8-c75f-4616-baf1-eca86ec9e510 00:28:33.340 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:28:33.601 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:28:33.601 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:33.861 [2024-12-10 04:16:28.144047] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:33.861 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10262a8-c75f-4616-baf1-eca86ec9e510 00:28:33.861 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:28:33.861 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10262a8-c75f-4616-baf1-eca86ec9e510 00:28:33.861 04:16:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:33.861 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:33.861 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:33.861 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:33.861 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:33.861 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:33.861 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:33.861 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:33.861 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10262a8-c75f-4616-baf1-eca86ec9e510 00:28:34.121 request: 00:28:34.121 { 00:28:34.121 "uuid": "e10262a8-c75f-4616-baf1-eca86ec9e510", 00:28:34.121 "method": "bdev_lvol_get_lvstores", 00:28:34.121 "req_id": 1 00:28:34.121 } 00:28:34.121 Got JSON-RPC error response 00:28:34.121 response: 00:28:34.121 { 00:28:34.121 "code": -19, 00:28:34.121 "message": "No such device" 00:28:34.121 } 00:28:34.121 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:28:34.121 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:34.121 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:34.121 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:34.121 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:34.380 aio_bdev 00:28:34.380 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 376320c3-ae5e-417c-850a-9b8967bb8f1f 00:28:34.380 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=376320c3-ae5e-417c-850a-9b8967bb8f1f 00:28:34.380 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:34.380 04:16:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:34.380 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:34.380 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:34.380 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:34.639 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 376320c3-ae5e-417c-850a-9b8967bb8f1f -t 2000 00:28:34.897 [ 00:28:34.897 { 00:28:34.897 "name": "376320c3-ae5e-417c-850a-9b8967bb8f1f", 00:28:34.897 "aliases": [ 00:28:34.897 "lvs/lvol" 00:28:34.897 ], 00:28:34.897 "product_name": "Logical Volume", 00:28:34.897 "block_size": 4096, 00:28:34.897 "num_blocks": 38912, 00:28:34.897 "uuid": "376320c3-ae5e-417c-850a-9b8967bb8f1f", 00:28:34.897 "assigned_rate_limits": { 00:28:34.897 "rw_ios_per_sec": 0, 00:28:34.897 "rw_mbytes_per_sec": 0, 00:28:34.897 "r_mbytes_per_sec": 0, 00:28:34.897 "w_mbytes_per_sec": 0 00:28:34.897 }, 00:28:34.897 "claimed": false, 00:28:34.897 "zoned": false, 00:28:34.897 "supported_io_types": { 00:28:34.897 "read": true, 00:28:34.897 "write": true, 00:28:34.897 "unmap": true, 00:28:34.897 "flush": false, 00:28:34.897 "reset": true, 00:28:34.897 "nvme_admin": false, 00:28:34.897 "nvme_io": false, 00:28:34.897 "nvme_io_md": false, 00:28:34.897 "write_zeroes": true, 00:28:34.897 "zcopy": false, 00:28:34.897 "get_zone_info": false, 00:28:34.897 "zone_management": false, 00:28:34.898 "zone_append": false, 00:28:34.898 "compare": false, 00:28:34.898 "compare_and_write": false, 00:28:34.898 "abort": false, 00:28:34.898 "seek_hole": true, 00:28:34.898 "seek_data": true, 00:28:34.898 "copy": false, 00:28:34.898 "nvme_iov_md": false 00:28:34.898 }, 00:28:34.898 "driver_specific": { 00:28:34.898 "lvol": { 00:28:34.898 "lvol_store_uuid": "e10262a8-c75f-4616-baf1-eca86ec9e510", 00:28:34.898 "base_bdev": "aio_bdev", 00:28:34.898 "thin_provision": false, 00:28:34.898 "num_allocated_clusters": 38, 00:28:34.898 "snapshot": false, 00:28:34.898 "clone": false, 00:28:34.898 "esnap_clone": false 00:28:34.898 } 00:28:34.898 } 00:28:34.898 } 00:28:34.898 ] 00:28:34.898 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:34.898 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10262a8-c75f-4616-baf1-eca86ec9e510 00:28:34.898 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:35.158 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:35.158 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e10262a8-c75f-4616-baf1-eca86ec9e510 00:28:35.158 04:16:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:35.728 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:35.728 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 376320c3-ae5e-417c-850a-9b8967bb8f1f 00:28:35.728 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e10262a8-c75f-4616-baf1-eca86ec9e510 00:28:36.296 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:36.296 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:36.555 00:28:36.555 real 0m19.601s 00:28:36.555 user 0m36.685s 00:28:36.555 sys 0m4.719s 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:36.555 ************************************ 00:28:36.555 END TEST lvs_grow_dirty 00:28:36.555 ************************************ 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:28:36.555 nvmf_trace.0 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:36.555 rmmod nvme_tcp 00:28:36.555 rmmod nvme_fabrics 00:28:36.555 rmmod nvme_keyring 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2532447 ']' 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2532447 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2532447 ']' 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2532447 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2532447 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2532447' 00:28:36.555 killing process with pid 2532447 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2532447 00:28:36.555 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2532447 00:28:36.813 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:36.813 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:36.813 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:36.813 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:28:36.813 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:28:36.813 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:36.813 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:28:36.813 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:36.813 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:36.813 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.813 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:36.813 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.719 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:38.979 00:28:38.979 real 0m42.906s 00:28:38.979 user 0m55.872s 00:28:38.979 sys 0m8.594s 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:38.979 ************************************ 00:28:38.979 END TEST nvmf_lvs_grow 00:28:38.979 ************************************ 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:38.979 ************************************ 00:28:38.979 START TEST nvmf_bdev_io_wait 00:28:38.979 ************************************ 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:38.979 * Looking for test storage... 
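nvmftestfini, traced just above, unwinds the fixture: it archives the shared-memory trace file, kills the target, unloads the NVMe/TCP kernel modules, strips the SPDK-tagged iptables rules, and removes the target network namespace before flushing the initiator interface. A hedged sketch of the equivalent manual cleanup (namespace and interface names as used in this run; $nvmfpid stands for the target PID, 2532447 here):

#!/usr/bin/env bash
# Manual cleanup sketch mirroring nvmftestfini; individual steps are allowed to fail.
set +e

tar -C /dev/shm -czf ./nvmf_trace.0_shm.tar.gz nvmf_trace.0   # keep the trace for later analysis
kill -9 "$nvmfpid" 2>/dev/null                                # stop the nvmf_tgt reactor process

modprobe -v -r nvme-tcp                                       # drops nvme_tcp and its users
modprobe -v -r nvme-fabrics

iptables-save | grep -v SPDK_NVMF | iptables-restore          # remove only SPDK-tagged rules

ip netns delete cvl_0_0_ns_spdk 2>/dev/null                   # target-side namespace
ip -4 addr flush cvl_0_1                                      # initiator-side interface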
00:28:38.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:38.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.979 --rc genhtml_branch_coverage=1 00:28:38.979 --rc genhtml_function_coverage=1 00:28:38.979 --rc genhtml_legend=1 00:28:38.979 --rc geninfo_all_blocks=1 00:28:38.979 --rc geninfo_unexecuted_blocks=1 00:28:38.979 00:28:38.979 ' 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:38.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.979 --rc genhtml_branch_coverage=1 00:28:38.979 --rc genhtml_function_coverage=1 00:28:38.979 --rc genhtml_legend=1 00:28:38.979 --rc geninfo_all_blocks=1 00:28:38.979 --rc geninfo_unexecuted_blocks=1 00:28:38.979 00:28:38.979 ' 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:38.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.979 --rc genhtml_branch_coverage=1 00:28:38.979 --rc genhtml_function_coverage=1 00:28:38.979 --rc genhtml_legend=1 00:28:38.979 --rc geninfo_all_blocks=1 00:28:38.979 --rc geninfo_unexecuted_blocks=1 00:28:38.979 00:28:38.979 ' 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:38.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.979 --rc genhtml_branch_coverage=1 00:28:38.979 --rc genhtml_function_coverage=1 00:28:38.979 --rc genhtml_legend=1 00:28:38.979 --rc geninfo_all_blocks=1 00:28:38.979 --rc 
geninfo_unexecuted_blocks=1 00:28:38.979 00:28:38.979 ' 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:38.979 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:28:38.980 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
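nvmf/common.sh, being sourced above, assembles the target's command line incrementally in the NVMF_APP array: the shared-memory id and a full tracepoint mask always go in, --interrupt-mode is appended because this job sets the interrupt-mode flag, and once the namespace exists the whole array is prefixed with an ip netns exec wrapper. That is how the nvmf_tgt invocation seen further down is produced. Roughly (the binary-path variable name is illustrative, not taken from the trace):

# How the NVMF_APP array accumulates for this run (sketch).
NVMF_APP=("$SPDK_BIN_DIR/nvmf_tgt")                        # binary; variable name assumed
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                # shm id 0, all tracepoint groups
NVMF_APP+=(--interrupt-mode)                               # interrupt-mode flag is set for this job
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")     # run the target inside its netns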
00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:41.526 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
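gather_supported_nvmf_pci_devs, whose trace continues above and below, whitelists NICs by PCI vendor:device id (E810 as 0x8086:0x1592/0x159b, X722 as 0x8086:0x37d2, plus a set of Mellanox ids) and then resolves each matching PCI address to its kernel net device through sysfs. The same lookup can be reproduced by hand; the sketch below assumes lspci is available and only covers the E810 id found in this run:

#!/usr/bin/env bash
# Find E810 ports (vendor 0x8086, device 0x159b) and the net devices bound to them.
for pci in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] || continue
        echo "Found net device under $pci: ${path##*/}"    # e.g. cvl_0_0, cvl_0_1
    done
done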
00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:41.527 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:41.527 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:41.527 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:41.527 
04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:41.527 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:41.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:28:41.527 00:28:41.527 --- 10.0.0.2 ping statistics --- 00:28:41.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.527 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:41.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:28:41.527 00:28:41.527 --- 10.0.0.1 ping statistics --- 00:28:41.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.527 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2534976 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2534976 00:28:41.527 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2534976 ']' 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
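The nvmf_tcp_init sequence above builds a point-to-point topology out of the two E810 ports: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction confirms reachability. Condensed from the commands in the trace:

#!/usr/bin/env bash
set -e
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target port into the netns

ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side (root netns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open port 4420, tagged so nvmftestfini can strip the rule again later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                                                     # root netns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                       # target netns -> initiator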
00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:41.528 [2024-12-10 04:16:35.551614] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:41.528 [2024-12-10 04:16:35.552677] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:28:41.528 [2024-12-10 04:16:35.552755] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.528 [2024-12-10 04:16:35.629837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:41.528 [2024-12-10 04:16:35.691700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.528 [2024-12-10 04:16:35.691753] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.528 [2024-12-10 04:16:35.691781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.528 [2024-12-10 04:16:35.691792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.528 [2024-12-10 04:16:35.691802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:41.528 [2024-12-10 04:16:35.693394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.528 [2024-12-10 04:16:35.693459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:41.528 [2024-12-10 04:16:35.693527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:41.528 [2024-12-10 04:16:35.693530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.528 [2024-12-10 04:16:35.694057] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
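nvmfappstart then launches the target inside that namespace with four cores, all tracepoint groups, interrupt mode, and --wait-for-rpc so that bdev options can still be adjusted before the framework initializes; waitforlisten blocks until the RPC socket answers. The polling loop below is an illustrative stand-in for the common.sh helper, using rpc_get_methods as a cheap probe:

#!/usr/bin/env bash
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns exec cvl_0_0_ns_spdk \
    "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!

# Stand-in for waitforlisten: poll until the app answers on /var/tmp/spdk.sock.
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done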
00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:41.528 [2024-12-10 04:16:35.870853] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:41.528 [2024-12-10 04:16:35.871062] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:41.528 [2024-12-10 04:16:35.871953] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:41.528 [2024-12-10 04:16:35.872741] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
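Because the target was started with --wait-for-rpc, bdev_io_wait.sh can shrink the bdev I/O pools before anything allocates from them and only then start the framework; the notices above show each nvmf poll-group thread coming up in interrupt mode as that happens. The two RPCs involved, with the values used here (rpc.py stands in for the test's rpc_cmd wrapper):

rpc.py bdev_set_options -p 5 -c 1     # small bdev_io pool (5) and per-thread cache (1)
rpc.py framework_start_init           # initialize subsystems; poll groups start in intr mode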
00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:41.528 [2024-12-10 04:16:35.878249] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.528 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:41.787 Malloc0 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:41.787 [2024-12-10 04:16:35.938413] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2535121 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2535122 00:28:41.787 04:16:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2535125 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:28:41.787 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:41.787 { 00:28:41.787 "params": { 00:28:41.787 "name": "Nvme$subsystem", 00:28:41.787 "trtype": "$TEST_TRANSPORT", 00:28:41.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.787 "adrfam": "ipv4", 00:28:41.788 "trsvcid": "$NVMF_PORT", 00:28:41.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.788 "hdgst": ${hdgst:-false}, 00:28:41.788 "ddgst": ${ddgst:-false} 00:28:41.788 }, 00:28:41.788 "method": "bdev_nvme_attach_controller" 00:28:41.788 } 00:28:41.788 EOF 00:28:41.788 )") 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2535127 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:41.788 { 00:28:41.788 "params": { 00:28:41.788 "name": "Nvme$subsystem", 00:28:41.788 "trtype": "$TEST_TRANSPORT", 00:28:41.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.788 "adrfam": "ipv4", 00:28:41.788 "trsvcid": "$NVMF_PORT", 00:28:41.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.788 "hdgst": ${hdgst:-false}, 00:28:41.788 "ddgst": ${ddgst:-false} 00:28:41.788 }, 00:28:41.788 "method": "bdev_nvme_attach_controller" 00:28:41.788 } 00:28:41.788 EOF 00:28:41.788 )") 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:41.788 { 00:28:41.788 "params": { 00:28:41.788 "name": "Nvme$subsystem", 00:28:41.788 "trtype": "$TEST_TRANSPORT", 00:28:41.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.788 "adrfam": "ipv4", 00:28:41.788 "trsvcid": "$NVMF_PORT", 00:28:41.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.788 "hdgst": ${hdgst:-false}, 00:28:41.788 "ddgst": ${ddgst:-false} 00:28:41.788 }, 00:28:41.788 "method": "bdev_nvme_attach_controller" 00:28:41.788 } 00:28:41.788 EOF 00:28:41.788 )") 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:41.788 { 00:28:41.788 "params": { 00:28:41.788 "name": "Nvme$subsystem", 00:28:41.788 "trtype": "$TEST_TRANSPORT", 00:28:41.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.788 "adrfam": "ipv4", 00:28:41.788 "trsvcid": "$NVMF_PORT", 00:28:41.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.788 "hdgst": ${hdgst:-false}, 00:28:41.788 "ddgst": ${ddgst:-false} 00:28:41.788 }, 00:28:41.788 "method": "bdev_nvme_attach_controller" 00:28:41.788 } 00:28:41.788 EOF 00:28:41.788 )") 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2535121 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
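Before those four bdevperf jobs produce any I/O, the target side has already been provisioned over RPC: a TCP transport with 8192-byte in-capsule data, a 64 MiB Malloc bdev with 512-byte blocks, a subsystem carrying that namespace, and a listener on 10.0.0.2:4420. Pulled out of the trace above into one place (rpc.py again standing in for rpc_cmd):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420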
00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:41.788 "params": { 00:28:41.788 "name": "Nvme1", 00:28:41.788 "trtype": "tcp", 00:28:41.788 "traddr": "10.0.0.2", 00:28:41.788 "adrfam": "ipv4", 00:28:41.788 "trsvcid": "4420", 00:28:41.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:41.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:41.788 "hdgst": false, 00:28:41.788 "ddgst": false 00:28:41.788 }, 00:28:41.788 "method": "bdev_nvme_attach_controller" 00:28:41.788 }' 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:41.788 "params": { 00:28:41.788 "name": "Nvme1", 00:28:41.788 "trtype": "tcp", 00:28:41.788 "traddr": "10.0.0.2", 00:28:41.788 "adrfam": "ipv4", 00:28:41.788 "trsvcid": "4420", 00:28:41.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:41.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:41.788 "hdgst": false, 00:28:41.788 "ddgst": false 00:28:41.788 }, 00:28:41.788 "method": "bdev_nvme_attach_controller" 00:28:41.788 }' 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:41.788 "params": { 00:28:41.788 "name": "Nvme1", 00:28:41.788 "trtype": "tcp", 00:28:41.788 "traddr": "10.0.0.2", 00:28:41.788 "adrfam": "ipv4", 00:28:41.788 "trsvcid": "4420", 00:28:41.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:41.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:41.788 "hdgst": false, 00:28:41.788 "ddgst": false 00:28:41.788 }, 00:28:41.788 "method": "bdev_nvme_attach_controller" 00:28:41.788 }' 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:41.788 04:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:41.788 "params": { 00:28:41.788 "name": "Nvme1", 00:28:41.788 "trtype": "tcp", 00:28:41.788 "traddr": "10.0.0.2", 00:28:41.788 "adrfam": "ipv4", 00:28:41.788 "trsvcid": "4420", 00:28:41.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:41.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:41.788 "hdgst": false, 00:28:41.788 "ddgst": false 00:28:41.788 }, 00:28:41.788 "method": "bdev_nvme_attach_controller" 00:28:41.788 }' 00:28:41.788 [2024-12-10 04:16:35.990871] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:28:41.788 [2024-12-10 04:16:35.990873] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:28:41.788 [2024-12-10 04:16:35.990871] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:28:41.788 [2024-12-10 04:16:35.990871] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
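Each bdevperf instance receives its controller definition as a JSON config over a process-substitution fd; the bdev_nvme_attach_controller params printed above are that config's payload. A hedged sketch of one launch (the write workload), assuming the standard SPDK json-config wrapper around the printed params:

#!/usr/bin/env bash
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

# Wrapper object is an assumption; the params block is copied from the trace above.
cat > /tmp/nvme1.json <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller",
 "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4",
 "trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode1",
 "hostnqn":"nqn.2016-06.io.spdk:host1","hdgst":false,"ddgst":false}}]}]}
EOF

"$bdevperf" -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!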
00:28:41.788 [2024-12-10 04:16:35.990960] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:28:41.788 [2024-12-10 04:16:35.990960] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:28:41.788 [2024-12-10 04:16:35.990960] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:28:41.788 [2024-12-10 04:16:35.990973] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:28:42.046 [2024-12-10 04:16:36.172034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:42.046 [2024-12-10 04:16:36.225177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:28:42.046 [2024-12-10 04:16:36.271411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:42.047 [2024-12-10 04:16:36.325957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:28:42.047 [2024-12-10 04:16:36.370558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:42.047 [2024-12-10 04:16:36.421126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:28:42.304 [2024-12-10 04:16:36.437969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:42.304 [2024-12-10 04:16:36.487631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:28:42.304 Running I/O for 1 seconds...
00:28:42.304 Running I/O for 1 seconds...
00:28:42.304 Running I/O for 1 seconds...
00:28:42.562 Running I/O for 1 seconds...
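The test drives four bdevperf processes in parallel, one workload per core mask (write on 0x10, read on 0x20, flush on 0x40, unmap on 0x80, matching the result tables that follow), then waits on each PID, which is what the '# wait 25351xx' lines are doing. A hedged sketch of that launch-and-reap loop; the array names and the workload-to-instance ordering are illustrative:

  # One bdevperf per workload, each pinned to its own core, then reap them all.
  declare -A mask=( [write]=0x10 [read]=0x20 [flush]=0x40 [unmap]=0x80 )
  pids=()
  i=1
  for wl in write read flush unmap; do
    ./build/examples/bdevperf -m "${mask[$wl]}" -i "$i" --json <(gen_attach_json) \
        -q 128 -o 4096 -w "$wl" -t 1 -s 256 &
    pids+=("$!")
    i=$((i + 1))
  done
  for pid in "${pids[@]}"; do
    wait "$pid"
  done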
00:28:43.501 191680.00 IOPS, 748.75 MiB/s 00:28:43.501 Latency(us) 00:28:43.501 [2024-12-10T03:16:37.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.501 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:28:43.501 Nvme1n1 : 1.00 191290.31 747.23 0.00 0.00 665.44 283.69 2014.63 00:28:43.501 [2024-12-10T03:16:37.890Z] =================================================================================================================== 00:28:43.501 [2024-12-10T03:16:37.890Z] Total : 191290.31 747.23 0.00 0.00 665.44 283.69 2014.63 00:28:43.501 7222.00 IOPS, 28.21 MiB/s 00:28:43.501 Latency(us) 00:28:43.501 [2024-12-10T03:16:37.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.501 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:28:43.501 Nvme1n1 : 1.02 7215.36 28.18 0.00 0.00 17613.40 4126.34 26408.58 00:28:43.501 [2024-12-10T03:16:37.890Z] =================================================================================================================== 00:28:43.501 [2024-12-10T03:16:37.890Z] Total : 7215.36 28.18 0.00 0.00 17613.40 4126.34 26408.58 00:28:43.501 8686.00 IOPS, 33.93 MiB/s 00:28:43.501 Latency(us) 00:28:43.501 [2024-12-10T03:16:37.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.501 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:28:43.501 Nvme1n1 : 1.01 8751.06 34.18 0.00 0.00 14563.28 6019.60 21359.88 00:28:43.501 [2024-12-10T03:16:37.890Z] =================================================================================================================== 00:28:43.501 [2024-12-10T03:16:37.890Z] Total : 8751.06 34.18 0.00 0.00 14563.28 6019.60 21359.88 00:28:43.501 6809.00 IOPS, 26.60 MiB/s 00:28:43.501 Latency(us) 00:28:43.501 [2024-12-10T03:16:37.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.501 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:28:43.501 Nvme1n1 : 1.01 6893.77 26.93 0.00 0.00 18502.15 5558.42 34758.35 00:28:43.501 [2024-12-10T03:16:37.890Z] =================================================================================================================== 00:28:43.501 [2024-12-10T03:16:37.890Z] Total : 6893.77 26.93 0.00 0.00 18502.15 5558.42 34758.35 00:28:43.501 04:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2535122 00:28:43.761 04:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2535125 00:28:43.761 04:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2535127 00:28:43.761 04:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:43.761 04:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.761 04:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:43.761 04:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.761 04:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:28:43.761 04:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:28:43.761 04:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:43.761 04:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:28:43.761 04:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:43.761 04:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:28:43.761 04:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:43.761 04:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:43.761 rmmod nvme_tcp 00:28:43.761 rmmod nvme_fabrics 00:28:43.761 rmmod nvme_keyring 00:28:43.761 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:43.761 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:28:43.761 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:28:43.761 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2534976 ']' 00:28:43.761 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2534976 00:28:43.761 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2534976 ']' 00:28:43.761 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2534976 00:28:43.761 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:28:43.761 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:43.761 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2534976 00:28:43.761 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:43.761 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:43.761 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2534976' 00:28:43.761 killing process with pid 2534976 00:28:43.761 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2534976 00:28:43.761 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2534976 00:28:44.021 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:44.021 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:44.021 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:44.021 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:28:44.021 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
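The teardown traced here is the standard nvmftestfini path: flush outstanding I/O, unload the kernel NVMe-oF modules, kill the target process, and strip the firewall rule that was tagged SPDK_NVMF during setup. A rough manual equivalent, assuming the target PID is still in $nvmfpid and the target was started by this shell:

  # Approximate manual equivalent of nvmftestfini for a TCP target.
  sync
  modprobe -v -r nvme-tcp          # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"
  # Drop only the rules the test added, identified by their SPDK_NVMF comment:
  iptables-save | grep -v SPDK_NVMF | iptables-restore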
00:28:44.021 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:44.021 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:28:44.021 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:44.021 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:44.021 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.021 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.021 04:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.929 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:45.929 00:28:45.929 real 0m7.151s 00:28:45.929 user 0m14.295s 00:28:45.929 sys 0m3.954s 00:28:45.929 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:45.929 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:45.929 ************************************ 00:28:45.929 END TEST nvmf_bdev_io_wait 00:28:45.929 ************************************ 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:46.188 ************************************ 00:28:46.188 START TEST nvmf_queue_depth 00:28:46.188 ************************************ 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:28:46.188 * Looking for test storage... 
00:28:46.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:46.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.188 --rc genhtml_branch_coverage=1 00:28:46.188 --rc genhtml_function_coverage=1 00:28:46.188 --rc genhtml_legend=1 00:28:46.188 --rc geninfo_all_blocks=1 00:28:46.188 --rc geninfo_unexecuted_blocks=1 00:28:46.188 00:28:46.188 ' 00:28:46.188 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:46.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.188 --rc genhtml_branch_coverage=1 00:28:46.188 --rc genhtml_function_coverage=1 00:28:46.189 --rc genhtml_legend=1 00:28:46.189 --rc geninfo_all_blocks=1 00:28:46.189 --rc geninfo_unexecuted_blocks=1 00:28:46.189 00:28:46.189 ' 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:46.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.189 --rc genhtml_branch_coverage=1 00:28:46.189 --rc genhtml_function_coverage=1 00:28:46.189 --rc genhtml_legend=1 00:28:46.189 --rc geninfo_all_blocks=1 00:28:46.189 --rc geninfo_unexecuted_blocks=1 00:28:46.189 00:28:46.189 ' 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:46.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.189 --rc genhtml_branch_coverage=1 00:28:46.189 --rc genhtml_function_coverage=1 00:28:46.189 --rc genhtml_legend=1 00:28:46.189 --rc geninfo_all_blocks=1 00:28:46.189 --rc 
geninfo_unexecuted_blocks=1 00:28:46.189 00:28:46.189 ' 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:28:46.189 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
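build_nvmf_app_args, traced just above, assembles the target command line as a bash array and appends --interrupt-mode because the suite was invoked with --interrupt-mode. A simplified sketch of that assembly; the default values shown are assumptions:

  # Simplified version of the NVMF_APP argument assembly traced above.
  NVMF_APP=(./build/bin/nvmf_tgt)
  NVMF_APP_SHM_ID=${NVMF_APP_SHM_ID:-0}
  interrupt_mode=1                       # set by the suite's --interrupt-mode flag

  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
  if (( interrupt_mode == 1 )); then
    NVMF_APP+=(--interrupt-mode)
  fi
  echo "target command line: ${NVMF_APP[*]}"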
00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:48.720 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:48.721 04:16:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:48.721 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:48.721 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:28:48.721 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:48.721 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:48.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:28:48.721 00:28:48.721 --- 10.0.0.2 ping statistics --- 00:28:48.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.721 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:48.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:28:48.721 00:28:48.721 --- 10.0.0.1 ping statistics --- 00:28:48.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.721 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2537348 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2537348 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2537348 ']' 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
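Before the target comes up, nvmf_tcp_init (traced above) splits the two e810 ports across network namespaces, so the target at 10.0.0.2 (cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator at 10.0.0.1 (cvl_0_1 in the root namespace) exchange traffic over a real link, verified by the two pings. A condensed sketch of that setup using the interface and address values from the trace:

  # Move the target-side port into its own namespace and wire up addresses.
  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
  ip netns add "$NVMF_TARGET_NAMESPACE"
  ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
  # Admit NVMe/TCP traffic, tagged so the teardown's 'grep -v SPDK_NVMF' can remove it:
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1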
00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:48.721 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:48.721 [2024-12-10 04:16:42.876381] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:48.722 [2024-12-10 04:16:42.877431] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:28:48.722 [2024-12-10 04:16:42.877483] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.722 [2024-12-10 04:16:42.951890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.722 [2024-12-10 04:16:43.007515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.722 [2024-12-10 04:16:43.007593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.722 [2024-12-10 04:16:43.007608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.722 [2024-12-10 04:16:43.007633] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.722 [2024-12-10 04:16:43.007643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:48.722 [2024-12-10 04:16:43.008242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.722 [2024-12-10 04:16:43.094721] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:48.722 [2024-12-10 04:16:43.095011] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
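nvmfappstart then launches nvmf_tgt inside that namespace on a single core (-m 0x2) with --interrupt-mode, and waitforlisten blocks until the RPC socket answers before any configuration is attempted. A condensed sketch; the polling loop is an illustrative stand-in for the real waitforlisten helper:

  # Start the target in the namespace and wait for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done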
00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:48.980 [2024-12-10 04:16:43.144882] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:48.980 Malloc0 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
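The rpc_cmd calls above configure the target end to end: a TCP transport (with the suite's -o -u 8192 options), a 64 MiB malloc bdev, a subsystem with a serial number, the namespace, and finally the listener on 10.0.0.2:4420. The same sequence expressed as direct scripts/rpc.py invocations against the default /var/tmp/spdk.sock, a hedged equivalent of the rpc_cmd wrapper used by the test:

  # Target-side configuration, one RPC per step.
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420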
00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:48.980 [2024-12-10 04:16:43.205053] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2537368 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2537368 /var/tmp/bdevperf.sock 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2537368 ']' 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:48.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:48.980 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:48.980 [2024-12-10 04:16:43.251648] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
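On the initiator side, bdevperf is started with -z (idle until told to run over RPC) on its own socket, the remote namespace is attached as bdev NVMe0n1 over TCP, and perform_tests drives the 1024-deep verify workload for 10 seconds, producing the table and JSON that follow. A hedged sketch of that sequence using the socket path and NQN from the trace:

  # Initiator-side run against the exported namespace.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  # (the test waits for /var/tmp/bdevperf.sock via waitforlisten before issuing RPCs)
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  kill "$bdevperf_pid" && wait "$bdevperf_pid"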
00:28:48.980 [2024-12-10 04:16:43.251725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2537368 ] 00:28:48.980 [2024-12-10 04:16:43.316865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.240 [2024-12-10 04:16:43.373352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.240 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.240 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:28:49.240 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:49.240 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.240 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:49.500 NVMe0n1 00:28:49.500 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.500 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:49.500 Running I/O for 10 seconds... 00:28:51.834 8472.00 IOPS, 33.09 MiB/s [2024-12-10T03:16:47.158Z] 8704.00 IOPS, 34.00 MiB/s [2024-12-10T03:16:48.125Z] 8683.67 IOPS, 33.92 MiB/s [2024-12-10T03:16:49.080Z] 8704.50 IOPS, 34.00 MiB/s [2024-12-10T03:16:50.016Z] 8769.60 IOPS, 34.26 MiB/s [2024-12-10T03:16:50.955Z] 8730.17 IOPS, 34.10 MiB/s [2024-12-10T03:16:52.338Z] 8770.14 IOPS, 34.26 MiB/s [2024-12-10T03:16:53.272Z] 8758.25 IOPS, 34.21 MiB/s [2024-12-10T03:16:54.212Z] 8764.11 IOPS, 34.23 MiB/s [2024-12-10T03:16:54.212Z] 8788.10 IOPS, 34.33 MiB/s 00:28:59.823 Latency(us) 00:28:59.823 [2024-12-10T03:16:54.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.823 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:28:59.823 Verification LBA range: start 0x0 length 0x4000 00:28:59.823 NVMe0n1 : 10.10 8802.49 34.38 0.00 0.00 115804.96 21651.15 68739.98 00:28:59.823 [2024-12-10T03:16:54.212Z] =================================================================================================================== 00:28:59.823 [2024-12-10T03:16:54.212Z] Total : 8802.49 34.38 0.00 0.00 115804.96 21651.15 68739.98 00:28:59.823 { 00:28:59.823 "results": [ 00:28:59.823 { 00:28:59.823 "job": "NVMe0n1", 00:28:59.823 "core_mask": "0x1", 00:28:59.823 "workload": "verify", 00:28:59.823 "status": "finished", 00:28:59.823 "verify_range": { 00:28:59.823 "start": 0, 00:28:59.823 "length": 16384 00:28:59.823 }, 00:28:59.823 "queue_depth": 1024, 00:28:59.823 "io_size": 4096, 00:28:59.823 "runtime": 10.096353, 00:28:59.823 "iops": 8802.4854123068, 00:28:59.823 "mibps": 34.38470864182344, 00:28:59.823 "io_failed": 0, 00:28:59.823 "io_timeout": 0, 00:28:59.823 "avg_latency_us": 115804.96340760912, 00:28:59.823 "min_latency_us": 21651.152592592593, 00:28:59.823 "max_latency_us": 68739.98222222223 00:28:59.823 } 00:28:59.823 ], 
00:28:59.823 "core_count": 1 00:28:59.823 } 00:28:59.823 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2537368 00:28:59.823 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2537368 ']' 00:28:59.823 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2537368 00:28:59.823 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:28:59.823 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.823 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2537368 00:28:59.823 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:59.823 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:59.823 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2537368' 00:28:59.823 killing process with pid 2537368 00:28:59.823 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2537368 00:28:59.823 Received shutdown signal, test time was about 10.000000 seconds 00:28:59.823 00:28:59.823 Latency(us) 00:28:59.823 [2024-12-10T03:16:54.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.823 [2024-12-10T03:16:54.212Z] =================================================================================================================== 00:28:59.823 [2024-12-10T03:16:54.212Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:59.823 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2537368 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:00.084 rmmod nvme_tcp 00:29:00.084 rmmod nvme_fabrics 00:29:00.084 rmmod nvme_keyring 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:29:00.084 04:16:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2537348 ']' 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2537348 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2537348 ']' 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2537348 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2537348 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2537348' 00:29:00.084 killing process with pid 2537348 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2537348 00:29:00.084 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2537348 00:29:00.343 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:00.343 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:00.343 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:00.344 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:29:00.344 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:29:00.344 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:00.344 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:29:00.344 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:00.344 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:00.344 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.344 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.344 04:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:02.889 00:29:02.889 real 0m16.329s 00:29:02.889 user 0m22.580s 00:29:02.889 sys 0m3.420s 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:02.889 ************************************ 00:29:02.889 END TEST nvmf_queue_depth 00:29:02.889 ************************************ 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:02.889 ************************************ 00:29:02.889 START TEST nvmf_target_multipath 00:29:02.889 ************************************ 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:02.889 * Looking for test storage... 00:29:02.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:29:02.889 04:16:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:02.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.889 --rc genhtml_branch_coverage=1 00:29:02.889 --rc genhtml_function_coverage=1 00:29:02.889 --rc genhtml_legend=1 00:29:02.889 --rc geninfo_all_blocks=1 00:29:02.889 --rc geninfo_unexecuted_blocks=1 00:29:02.889 00:29:02.889 ' 00:29:02.889 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:02.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.890 --rc genhtml_branch_coverage=1 00:29:02.890 --rc genhtml_function_coverage=1 00:29:02.890 --rc genhtml_legend=1 00:29:02.890 --rc geninfo_all_blocks=1 00:29:02.890 --rc geninfo_unexecuted_blocks=1 00:29:02.890 00:29:02.890 ' 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:02.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.890 --rc genhtml_branch_coverage=1 00:29:02.890 --rc genhtml_function_coverage=1 00:29:02.890 --rc genhtml_legend=1 00:29:02.890 --rc geninfo_all_blocks=1 00:29:02.890 --rc 
geninfo_unexecuted_blocks=1 00:29:02.890 00:29:02.890 ' 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:02.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.890 --rc genhtml_branch_coverage=1 00:29:02.890 --rc genhtml_function_coverage=1 00:29:02.890 --rc genhtml_legend=1 00:29:02.890 --rc geninfo_all_blocks=1 00:29:02.890 --rc geninfo_unexecuted_blocks=1 00:29:02.890 00:29:02.890 ' 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
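The NVME_CONNECT and NVME_HOST helpers defined above are the pieces the initiator side of these tests uses to attach to a subsystem. A minimal sketch of how they compose, assuming the 10.0.0.2 listener address and the 4420 NVMF_PORT that this run configures elsewhere (the subsystem NQN and host identity are the generated values shown above):

  # Sketch only: an initiator-side connect built from the variables above.
  NVME_CONNECT='nvme connect'
  NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn

  # Connect over TCP to the target address/port used in this run.
  $NVME_CONNECT -t tcp -a 10.0.0.2 -s 4420 -n "$NVME_SUBNQN" "${NVME_HOST[@]}"

Here nvme-cli's -t, -a, -s and -n select the transport, target address, service id (port) and subsystem NQN, while --hostnqn/--hostid identify the host.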
00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:02.890 04:16:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:29:02.890 04:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
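The NIC probe whose trace resumes below (gather_supported_nvmf_pci_devs) walks the PCI bus for supported adapters (Intel E810 device IDs 0x1592/0x159b, plus the x722 and Mellanox IDs listed next) and records the kernel net interface exposed under each matching function in /sys. Stripped to its core, and limited to the E810 case this machine actually hits, the idea is roughly:

  # Sketch of the detection idea only; the real common.sh code also caches the
  # PCI bus scan and handles the x722/Mellanox and RDMA-specific branches.
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor") device=$(<"$pci/device")
      # 0x8086 is Intel; 0x159b/0x1592 are the E810 IDs matched in the trace below.
      [[ $vendor == 0x8086 && $device =~ ^0x159[2b]$ ]] || continue
      # Each matching function exposes its bound net interface(s) under .../net/.
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "Found ${pci##*/}: ${net##*/}"
      done
  done

In this run the two functions found are 0000:0a:00.0 and 0000:0a:00.1, whose net devices are cvl_0_0 and cvl_0_1.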
00:29:04.794 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.794 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:29:04.794 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:04.794 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:04.794 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:04.794 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:04.794 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:04.794 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:29:04.794 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:04.794 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:29:04.794 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:29:04.794 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:29:04.794 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:29:04.794 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:29:04.794 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:29:04.794 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.795 04:16:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:04.795 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:04.795 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:04.795 04:16:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:04.795 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:04.795 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:04.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:04.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:29:04.795 00:29:04.795 --- 10.0.0.2 ping statistics --- 00:29:04.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.795 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:04.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:29:04.795 00:29:04.795 --- 10.0.0.1 ping statistics --- 00:29:04.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.795 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:04.795 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:04.796 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.796 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:04.796 04:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:29:04.796 only one NIC for nvmf test 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:04.796 rmmod nvme_tcp 00:29:04.796 rmmod nvme_fabrics 00:29:04.796 rmmod nvme_keyring 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:04.796 04:16:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.796 04:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:07.332 04:17:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.332 00:29:07.332 real 0m4.416s 00:29:07.332 user 0m0.865s 00:29:07.332 sys 0m1.532s 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:07.332 ************************************ 00:29:07.332 END TEST nvmf_target_multipath 00:29:07.332 ************************************ 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:07.332 ************************************ 00:29:07.332 START TEST nvmf_zcopy 00:29:07.332 ************************************ 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:07.332 * Looking for test storage... 
00:29:07.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:29:07.332 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:07.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.333 --rc genhtml_branch_coverage=1 00:29:07.333 --rc genhtml_function_coverage=1 00:29:07.333 --rc genhtml_legend=1 00:29:07.333 --rc geninfo_all_blocks=1 00:29:07.333 --rc geninfo_unexecuted_blocks=1 00:29:07.333 00:29:07.333 ' 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:07.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.333 --rc genhtml_branch_coverage=1 00:29:07.333 --rc genhtml_function_coverage=1 00:29:07.333 --rc genhtml_legend=1 00:29:07.333 --rc geninfo_all_blocks=1 00:29:07.333 --rc geninfo_unexecuted_blocks=1 00:29:07.333 00:29:07.333 ' 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:07.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.333 --rc genhtml_branch_coverage=1 00:29:07.333 --rc genhtml_function_coverage=1 00:29:07.333 --rc genhtml_legend=1 00:29:07.333 --rc geninfo_all_blocks=1 00:29:07.333 --rc geninfo_unexecuted_blocks=1 00:29:07.333 00:29:07.333 ' 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:07.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.333 --rc genhtml_branch_coverage=1 00:29:07.333 --rc genhtml_function_coverage=1 00:29:07.333 --rc genhtml_legend=1 00:29:07.333 --rc geninfo_all_blocks=1 00:29:07.333 --rc geninfo_unexecuted_blocks=1 00:29:07.333 00:29:07.333 ' 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.333 04:17:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.333 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:09.236 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:09.236 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:29:09.236 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:09.236 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:09.236 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:09.236 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:29:09.237 04:17:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:09.237 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:09.237 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:09.237 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:09.237 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:09.237 04:17:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:09.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:09.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:29:09.237 00:29:09.237 --- 10.0.0.2 ping statistics --- 00:29:09.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.237 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:29:09.237 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:09.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:09.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:29:09.237 00:29:09.237 --- 10.0.0.1 ping statistics --- 00:29:09.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.237 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2542548 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2542548 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2542548 ']' 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:09.238 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:09.498 [2024-12-10 04:17:03.648403] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:09.498 [2024-12-10 04:17:03.649468] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:29:09.498 [2024-12-10 04:17:03.649524] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.498 [2024-12-10 04:17:03.721177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.498 [2024-12-10 04:17:03.779574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.498 [2024-12-10 04:17:03.779653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.498 [2024-12-10 04:17:03.779667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.498 [2024-12-10 04:17:03.779692] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.498 [2024-12-10 04:17:03.779702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:09.498 [2024-12-10 04:17:03.780367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.498 [2024-12-10 04:17:03.877939] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:09.498 [2024-12-10 04:17:03.878235] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:09.759 [2024-12-10 04:17:03.929024] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:09.759 [2024-12-10 04:17:03.945198] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:29:09.759 04:17:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:09.759 malloc0 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:09.759 { 00:29:09.759 "params": { 00:29:09.759 "name": "Nvme$subsystem", 00:29:09.759 "trtype": "$TEST_TRANSPORT", 00:29:09.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:09.759 "adrfam": "ipv4", 00:29:09.759 "trsvcid": "$NVMF_PORT", 00:29:09.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:09.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:09.759 "hdgst": ${hdgst:-false}, 00:29:09.759 "ddgst": ${ddgst:-false} 00:29:09.759 }, 00:29:09.759 "method": "bdev_nvme_attach_controller" 00:29:09.759 } 00:29:09.759 EOF 00:29:09.759 )") 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:09.759 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:09.759 "params": { 00:29:09.759 "name": "Nvme1", 00:29:09.759 "trtype": "tcp", 00:29:09.759 "traddr": "10.0.0.2", 00:29:09.759 "adrfam": "ipv4", 00:29:09.759 "trsvcid": "4420", 00:29:09.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:09.760 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:09.760 "hdgst": false, 00:29:09.760 "ddgst": false 00:29:09.760 }, 00:29:09.760 "method": "bdev_nvme_attach_controller" 00:29:09.760 }' 00:29:09.760 [2024-12-10 04:17:04.032233] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:29:09.760 [2024-12-10 04:17:04.032318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2542576 ] 00:29:09.760 [2024-12-10 04:17:04.104923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.018 [2024-12-10 04:17:04.164493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.018 Running I/O for 10 seconds... 00:29:12.339 5710.00 IOPS, 44.61 MiB/s [2024-12-10T03:17:07.662Z] 5739.00 IOPS, 44.84 MiB/s [2024-12-10T03:17:08.601Z] 5766.67 IOPS, 45.05 MiB/s [2024-12-10T03:17:09.541Z] 5765.25 IOPS, 45.04 MiB/s [2024-12-10T03:17:10.481Z] 5768.40 IOPS, 45.07 MiB/s [2024-12-10T03:17:11.421Z] 5773.00 IOPS, 45.10 MiB/s [2024-12-10T03:17:12.803Z] 5775.57 IOPS, 45.12 MiB/s [2024-12-10T03:17:13.738Z] 5774.88 IOPS, 45.12 MiB/s [2024-12-10T03:17:14.674Z] 5775.89 IOPS, 45.12 MiB/s [2024-12-10T03:17:14.674Z] 5774.90 IOPS, 45.12 MiB/s 00:29:20.285 Latency(us) 00:29:20.285 [2024-12-10T03:17:14.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.285 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:29:20.285 Verification LBA range: start 0x0 length 0x1000 00:29:20.285 Nvme1n1 : 10.01 5777.47 45.14 0.00 0.00 22094.57 297.34 29515.47 00:29:20.285 [2024-12-10T03:17:14.674Z] =================================================================================================================== 00:29:20.285 [2024-12-10T03:17:14.674Z] Total : 5777.47 45.14 0.00 0.00 22094.57 297.34 29515.47 00:29:20.285 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2543760 00:29:20.285 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:29:20.285 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:29:20.285 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:20.286 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:29:20.286 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:20.286 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:20.286 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.286 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.286 { 00:29:20.286 "params": { 00:29:20.286 "name": "Nvme$subsystem", 00:29:20.286 "trtype": "$TEST_TRANSPORT", 00:29:20.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.286 "adrfam": "ipv4", 00:29:20.286 "trsvcid": "$NVMF_PORT", 00:29:20.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.286 "hdgst": ${hdgst:-false}, 00:29:20.286 "ddgst": ${ddgst:-false} 00:29:20.286 }, 00:29:20.286 "method": "bdev_nvme_attach_controller" 00:29:20.286 } 00:29:20.286 EOF 00:29:20.286 )") 00:29:20.286 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:20.286 
04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:29:20.286 [2024-12-10 04:17:14.661004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.286 [2024-12-10 04:17:14.661043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.286 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:20.286 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:20.286 "params": { 00:29:20.286 "name": "Nvme1", 00:29:20.286 "trtype": "tcp", 00:29:20.286 "traddr": "10.0.0.2", 00:29:20.286 "adrfam": "ipv4", 00:29:20.286 "trsvcid": "4420", 00:29:20.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:20.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:20.286 "hdgst": false, 00:29:20.286 "ddgst": false 00:29:20.286 }, 00:29:20.286 "method": "bdev_nvme_attach_controller" 00:29:20.286 }' 00:29:20.545 [2024-12-10 04:17:14.668923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.668947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.676922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.676947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.684917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.684939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.692934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.692957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.698958] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:29:20.545 [2024-12-10 04:17:14.699028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2543760 ] 00:29:20.545 [2024-12-10 04:17:14.700919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.700943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.708919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.708943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.716899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.716922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.724914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.724938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.732916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.732939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.740899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.740921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.748900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.748936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.756913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.756936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.764901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.764924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.768158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.545 [2024-12-10 04:17:14.772893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.772932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.780963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.781003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.788917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.788942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.796919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.796942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:29:20.545 [2024-12-10 04:17:14.804899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.804936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.812898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.812921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.820889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.820927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.828601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.545 [2024-12-10 04:17:14.828900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.828923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.545 [2024-12-10 04:17:14.836914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.545 [2024-12-10 04:17:14.836937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.546 [2024-12-10 04:17:14.844947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.546 [2024-12-10 04:17:14.844982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.546 [2024-12-10 04:17:14.852940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.546 [2024-12-10 04:17:14.852982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.546 [2024-12-10 04:17:14.860960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.546 [2024-12-10 04:17:14.861002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.546 [2024-12-10 04:17:14.868955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.546 [2024-12-10 04:17:14.868996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.546 [2024-12-10 04:17:14.876985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.546 [2024-12-10 04:17:14.877028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.546 [2024-12-10 04:17:14.884961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.546 [2024-12-10 04:17:14.885001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.546 [2024-12-10 04:17:14.892924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.546 [2024-12-10 04:17:14.892950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.546 [2024-12-10 04:17:14.900961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.546 [2024-12-10 04:17:14.901001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.546 [2024-12-10 04:17:14.908951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.546 [2024-12-10 04:17:14.908993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.546 [2024-12-10 
04:17:14.916939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.546 [2024-12-10 04:17:14.916978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.546 [2024-12-10 04:17:14.924926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.546 [2024-12-10 04:17:14.924950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.808 [2024-12-10 04:17:14.932921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:14.932944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:14.940914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:14.940937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:14.948913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:14.948937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:14.956914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:14.956938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:14.964914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:14.964937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:14.972901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:14.972925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:14.980925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:14.980948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:14.988912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:14.988935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:14.996897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:14.996920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:15.004913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:15.004936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:15.012899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:15.012922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:15.020914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:15.020937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:15.028909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:15.028948] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:15.036914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:15.036936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:15.044899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:15.044922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:15.052918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:15.052941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:15.060904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:15.060928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:15.068922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:15.068949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:15.076915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:15.076938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:15.084914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:15.084939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:15.092941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:15.092964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 Running I/O for 5 seconds... 
00:29:20.809 [2024-12-10 04:17:15.109486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:15.109513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:15.125131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:15.125172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:15.135066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:15.135090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:15.151129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:15.151155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:15.167523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:15.167576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:20.809 [2024-12-10 04:17:15.178000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:20.809 [2024-12-10 04:17:15.178040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.194480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 [2024-12-10 04:17:15.194505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.211245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 [2024-12-10 04:17:15.211281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.226267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 [2024-12-10 04:17:15.226295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.244819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 [2024-12-10 04:17:15.244847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.255495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 [2024-12-10 04:17:15.255520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.269347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 [2024-12-10 04:17:15.269373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.289029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 [2024-12-10 04:17:15.289055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.299445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 [2024-12-10 04:17:15.299472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.313461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 
[2024-12-10 04:17:15.313488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.323586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 [2024-12-10 04:17:15.323614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.339682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 [2024-12-10 04:17:15.339710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.349768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 [2024-12-10 04:17:15.349794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.366229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 [2024-12-10 04:17:15.366270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.384857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 [2024-12-10 04:17:15.384883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.395579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 [2024-12-10 04:17:15.395621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.407731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 [2024-12-10 04:17:15.407757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.419177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 [2024-12-10 04:17:15.419204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.433321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 [2024-12-10 04:17:15.433348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.080 [2024-12-10 04:17:15.453758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.080 [2024-12-10 04:17:15.453786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.341 [2024-12-10 04:17:15.469756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.341 [2024-12-10 04:17:15.469784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.341 [2024-12-10 04:17:15.479117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.341 [2024-12-10 04:17:15.479144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.341 [2024-12-10 04:17:15.493379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.341 [2024-12-10 04:17:15.493406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.341 [2024-12-10 04:17:15.503634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.341 [2024-12-10 04:17:15.503661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:21.341 [2024-12-10 04:17:15.518478] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:21.341 [2024-12-10 04:17:15.518504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair "subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" repeats continuously from [2024-12-10 04:17:15.533071] through [2024-12-10 04:17:19.835756] (Jenkins timestamps 00:29:21.341 to 00:29:25.501), interleaved with the periodic fio progress reports below ...]
00:29:21.861 11262.00 IOPS, 87.98 MiB/s [2024-12-10T03:17:16.250Z]
00:29:22.902 11273.50 IOPS, 88.07 MiB/s [2024-12-10T03:17:17.291Z]
00:29:23.938 11271.00 IOPS, 88.05 MiB/s [2024-12-10T03:17:18.327Z]
00:29:24.980 11259.25 IOPS, 87.96 MiB/s [2024-12-10T03:17:19.369Z]
00:29:25.501 [2024-12-10 04:17:19.850381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.501 [2024-12-10 04:17:19.850407]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.501 [2024-12-10 04:17:19.868740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.501 [2024-12-10 04:17:19.868787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.501 [2024-12-10 04:17:19.879431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.501 [2024-12-10 04:17:19.879456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 [2024-12-10 04:17:19.894461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:19.894488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 [2024-12-10 04:17:19.910582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:19.910610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 [2024-12-10 04:17:19.928896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:19.928922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 [2024-12-10 04:17:19.939447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:19.939473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 [2024-12-10 04:17:19.953754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:19.953781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 [2024-12-10 04:17:19.964021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:19.964047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 [2024-12-10 04:17:19.976404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:19.976429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 [2024-12-10 04:17:19.987640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:19.987668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 [2024-12-10 04:17:20.002150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:20.002179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 [2024-12-10 04:17:20.021451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:20.021490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 [2024-12-10 04:17:20.040959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:20.040992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 [2024-12-10 04:17:20.052366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:20.052408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 [2024-12-10 04:17:20.064473] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:20.064500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 [2024-12-10 04:17:20.075363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:20.075390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 [2024-12-10 04:17:20.089517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:20.089568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 [2024-12-10 04:17:20.098713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:20.098740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 11267.40 IOPS, 88.03 MiB/s [2024-12-10T03:17:20.150Z] [2024-12-10 04:17:20.110422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:20.110447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 [2024-12-10 04:17:20.117779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:20.117805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 00:29:25.761 Latency(us) 00:29:25.761 [2024-12-10T03:17:20.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.761 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:29:25.761 Nvme1n1 : 5.01 11268.10 88.03 0.00 0.00 11344.25 3058.35 19903.53 00:29:25.761 [2024-12-10T03:17:20.150Z] =================================================================================================================== 00:29:25.761 [2024-12-10T03:17:20.150Z] Total : 11268.10 88.03 0.00 0.00 11344.25 3058.35 19903.53 00:29:25.761 [2024-12-10 04:17:20.125057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.761 [2024-12-10 04:17:20.125097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.761 [2024-12-10 04:17:20.132916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.762 [2024-12-10 04:17:20.132950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:25.762 [2024-12-10 04:17:20.140890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:25.762 [2024-12-10 04:17:20.140915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.148980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.149036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.156963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.157013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.164959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.165004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 
04:17:20.172961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.173008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.180961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.181007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.188957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.188999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.196955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.197004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.204957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.205005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.212960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.213007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.220962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.221010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.228963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.229012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.236961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.237009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.244955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.244998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.252955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.253003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.260958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.261007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.268948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.268988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.276915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.276936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.284943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.284978] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.292897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.292931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.300885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.300907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.308970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.309021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.316961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.317012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.324962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.324999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.332911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.332931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.340897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.340931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 [2024-12-10 04:17:20.348894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:26.020 [2024-12-10 04:17:20.348928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:26.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2543760) - No such process 00:29:26.020 04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2543760 00:29:26.021 04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:26.021 04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.021 04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:26.021 04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.021 04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:26.021 04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.021 04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:26.021 delay0 00:29:26.021 04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.021 04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 
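The wall of "Requested NSID 1 already in use" / "Unable to add namespace" messages above appears to be the error path zcopy.sh exercises on purpose: it keeps retrying nvmf_subsystem_add_ns with NSID 1 while I/O is in flight, and each attempt must fail as long as that namespace exists. The summary table two blocks up is self-consistent: 11268.10 IOPS at the 8192-byte I/O size works out to 11268.10 x 8192 / 2^20, i.e. about 88.03 MiB/s, matching the MiB/s column. The three RPCs just issued swap the Malloc-backed namespace for a delay bdev, presumably so the abort run that follows has slow, long-lived commands to cancel. A minimal standalone sketch of that sequence, assuming a target already serving RPCs on the default /var/tmp/spdk.sock and exposing a bdev named malloc0 (rpc_cmd in this log is the test harness wrapper around scripts/rpc.py):

# Detach the current namespace (NSID 1) from the subsystem.
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
# Wrap malloc0 in a delay bdev; the -r/-t/-w/-n values inject large average/p99 read and write latencies.
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# Re-attach the delayed bdev as NSID 1 so subsequent I/O against the subsystem sees the added latency.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1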
00:29:26.021 04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.021 04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:26.021 04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.021 04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:29:26.279 [2024-12-10 04:17:20.433562] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:34.494 Initializing NVMe Controllers 00:29:34.494 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.494 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:34.494 Initialization complete. Launching workers. 00:29:34.494 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 224, failed: 26116 00:29:34.494 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26201, failed to submit 139 00:29:34.494 success 26129, unsuccessful 72, failed 0 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:34.494 rmmod nvme_tcp 00:29:34.494 rmmod nvme_fabrics 00:29:34.494 rmmod nvme_keyring 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2542548 ']' 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2542548 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2542548 ']' 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2542548 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:34.494 04:17:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2542548 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2542548' 00:29:34.494 killing process with pid 2542548 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2542548 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2542548 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.494 04:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.874 04:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:35.874 00:29:35.874 real 0m28.723s 00:29:35.874 user 0m40.975s 00:29:35.874 sys 0m10.190s 00:29:35.874 04:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:35.874 04:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:35.874 ************************************ 00:29:35.874 END TEST nvmf_zcopy 00:29:35.874 ************************************ 00:29:35.874 04:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:35.874 04:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:35.874 04:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:35.874 04:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:35.874 
************************************ 00:29:35.874 START TEST nvmf_nmic 00:29:35.874 ************************************ 00:29:35.874 04:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:35.874 * Looking for test storage... 00:29:35.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:35.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.874 --rc genhtml_branch_coverage=1 00:29:35.874 --rc genhtml_function_coverage=1 00:29:35.874 --rc genhtml_legend=1 00:29:35.874 --rc geninfo_all_blocks=1 00:29:35.874 --rc geninfo_unexecuted_blocks=1 00:29:35.874 00:29:35.874 ' 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:35.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.874 --rc genhtml_branch_coverage=1 00:29:35.874 --rc genhtml_function_coverage=1 00:29:35.874 --rc genhtml_legend=1 00:29:35.874 --rc geninfo_all_blocks=1 00:29:35.874 --rc geninfo_unexecuted_blocks=1 00:29:35.874 00:29:35.874 ' 00:29:35.874 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:35.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.874 --rc genhtml_branch_coverage=1 00:29:35.875 --rc genhtml_function_coverage=1 00:29:35.875 --rc genhtml_legend=1 00:29:35.875 --rc geninfo_all_blocks=1 00:29:35.875 --rc geninfo_unexecuted_blocks=1 00:29:35.875 00:29:35.875 ' 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:35.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.875 --rc genhtml_branch_coverage=1 00:29:35.875 --rc genhtml_function_coverage=1 00:29:35.875 --rc genhtml_legend=1 00:29:35.875 --rc geninfo_all_blocks=1 00:29:35.875 --rc geninfo_unexecuted_blocks=1 00:29:35.875 00:29:35.875 ' 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.875 04:17:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:29:35.875 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:38.409 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:38.409 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:29:38.409 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:38.409 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:38.409 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:38.409 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:38.409 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:38.409 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:29:38.409 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:38.409 04:17:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:29:38.409 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:38.410 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:38.410 04:17:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:38.410 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:38.410 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.410 
04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:38.410 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
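nvmf_tcp_init above splits the two e810 ports into a point-to-point test topology: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as the 10.0.0.2 target side, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator side; the link-up commands, the iptables ACCEPT rule for port 4420, and the ping checks follow immediately below. A condensed sketch of the same setup, with the interface names and addresses taken from this log (they differ per machine):

TGT_NS=cvl_0_0_ns_spdk                                        # namespace name used by the harness
ip netns add "$TGT_NS"                                        # namespace for the target-side port
ip link set cvl_0_0 netns "$TGT_NS"                           # move the target-facing port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address, root namespace
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec "$TGT_NS" ip link set cvl_0_0 up
ip netns exec "$TGT_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # the harness additionally tags this rule with an SPDK_NVMF comment
ping -c 1 10.0.0.2                                            # root namespace can reach the target address
ip netns exec "$TGT_NS" ping -c 1 10.0.0.1                    # and the namespace can reach the initiator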
00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:38.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:38.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:29:38.410 00:29:38.410 --- 10.0.0.2 ping statistics --- 00:29:38.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.410 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:38.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:38.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:29:38.410 00:29:38.410 --- 10.0.0.1 ping statistics --- 00:29:38.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.410 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:38.410 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2547263 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2547263 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2547263 ']' 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:38.411 [2024-12-10 04:17:32.493911] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:38.411 [2024-12-10 04:17:32.494956] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:29:38.411 [2024-12-10 04:17:32.495009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:38.411 [2024-12-10 04:17:32.568170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:38.411 [2024-12-10 04:17:32.626831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:38.411 [2024-12-10 04:17:32.626901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:38.411 [2024-12-10 04:17:32.626915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:38.411 [2024-12-10 04:17:32.626925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:38.411 [2024-12-10 04:17:32.626934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:38.411 [2024-12-10 04:17:32.628507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.411 [2024-12-10 04:17:32.628639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:38.411 [2024-12-10 04:17:32.628719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:38.411 [2024-12-10 04:17:32.628724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.411 [2024-12-10 04:17:32.715253] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:38.411 [2024-12-10 04:17:32.715390] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:38.411 [2024-12-10 04:17:32.715687] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:38.411 [2024-12-10 04:17:32.716282] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:38.411 [2024-12-10 04:17:32.716489] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:38.411 [2024-12-10 04:17:32.761396] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.411 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:38.676 Malloc0 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:38.676 
04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:38.676 [2024-12-10 04:17:32.833573] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:29:38.676 test case1: single bdev can't be used in multiple subsystems 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.676 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:38.676 [2024-12-10 04:17:32.857323] bdev.c:8511:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:29:38.677 [2024-12-10 04:17:32.857352] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:29:38.677 [2024-12-10 04:17:32.857382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.677 request: 00:29:38.677 { 00:29:38.677 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:29:38.677 "namespace": { 00:29:38.677 "bdev_name": "Malloc0", 00:29:38.677 "no_auto_visible": false, 00:29:38.677 "hide_metadata": false 00:29:38.677 }, 00:29:38.677 "method": "nvmf_subsystem_add_ns", 00:29:38.677 "req_id": 1 00:29:38.677 } 00:29:38.677 Got JSON-RPC error response 00:29:38.677 response: 00:29:38.677 { 00:29:38.677 "code": -32602, 00:29:38.677 "message": "Invalid parameters" 00:29:38.677 } 00:29:38.677 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:38.677 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:29:38.677 04:17:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:29:38.677 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:29:38.677 Adding namespace failed - expected result. 00:29:38.677 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:29:38.677 test case2: host connect to nvmf target in multiple paths 00:29:38.677 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:38.677 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.677 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:38.677 [2024-12-10 04:17:32.869435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:38.677 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.677 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:38.935 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:29:38.935 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:29:38.935 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:29:38.935 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:29:38.935 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:29:38.935 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:29:41.473 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:29:41.473 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:29:41.473 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:29:41.473 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:29:41.473 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:29:41.473 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:29:41.473 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:41.473 [global] 00:29:41.473 thread=1 00:29:41.473 invalidate=1 
00:29:41.473 rw=write 00:29:41.473 time_based=1 00:29:41.473 runtime=1 00:29:41.473 ioengine=libaio 00:29:41.473 direct=1 00:29:41.473 bs=4096 00:29:41.474 iodepth=1 00:29:41.474 norandommap=0 00:29:41.474 numjobs=1 00:29:41.474 00:29:41.474 verify_dump=1 00:29:41.474 verify_backlog=512 00:29:41.474 verify_state_save=0 00:29:41.474 do_verify=1 00:29:41.474 verify=crc32c-intel 00:29:41.474 [job0] 00:29:41.474 filename=/dev/nvme0n1 00:29:41.474 Could not set queue depth (nvme0n1) 00:29:41.474 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:41.474 fio-3.35 00:29:41.474 Starting 1 thread 00:29:42.408 00:29:42.408 job0: (groupid=0, jobs=1): err= 0: pid=2547757: Tue Dec 10 04:17:36 2024 00:29:42.408 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:29:42.408 slat (nsec): min=6680, max=34574, avg=7496.06, stdev=1265.13 00:29:42.408 clat (usec): min=191, max=42079, avg=279.09, stdev=1572.62 00:29:42.408 lat (usec): min=198, max=42097, avg=286.58, stdev=1573.42 00:29:42.408 clat percentiles (usec): 00:29:42.408 | 1.00th=[ 196], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 202], 00:29:42.408 | 30.00th=[ 204], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 208], 00:29:42.408 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 277], 95.00th=[ 281], 00:29:42.408 | 99.00th=[ 293], 99.50th=[ 379], 99.90th=[41157], 99.95th=[41157], 00:29:42.408 | 99.99th=[42206] 00:29:42.408 write: IOPS=2116, BW=8468KiB/s (8671kB/s)(8476KiB/1001msec); 0 zone resets 00:29:42.408 slat (usec): min=8, max=29017, avg=26.26, stdev=630.12 00:29:42.408 clat (usec): min=144, max=327, avg=163.11, stdev=18.06 00:29:42.408 lat (usec): min=154, max=29270, avg=189.38, stdev=632.41 00:29:42.408 clat percentiles (usec): 00:29:42.408 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 153], 00:29:42.408 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:29:42.408 | 70.00th=[ 165], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 194], 00:29:42.408 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 289], 99.95th=[ 297], 00:29:42.408 | 99.99th=[ 326] 00:29:42.408 bw ( KiB/s): min= 8192, max= 8192, per=96.75%, avg=8192.00, stdev= 0.00, samples=1 00:29:42.408 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:29:42.408 lat (usec) : 250=92.08%, 500=7.78%, 750=0.07% 00:29:42.408 lat (msec) : 50=0.07% 00:29:42.409 cpu : usr=3.30%, sys=5.50%, ctx=4169, majf=0, minf=1 00:29:42.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:42.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.409 issued rwts: total=2048,2119,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:42.409 00:29:42.409 Run status group 0 (all jobs): 00:29:42.409 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:29:42.409 WRITE: bw=8468KiB/s (8671kB/s), 8468KiB/s-8468KiB/s (8671kB/s-8671kB/s), io=8476KiB (8679kB), run=1001-1001msec 00:29:42.409 00:29:42.409 Disk stats (read/write): 00:29:42.409 nvme0n1: ios=1663/2048, merge=0/0, ticks=1427/314, in_queue=1741, util=98.70% 00:29:42.409 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:42.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:29:42.669 04:17:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:42.669 rmmod nvme_tcp 00:29:42.669 rmmod nvme_fabrics 00:29:42.669 rmmod nvme_keyring 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2547263 ']' 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2547263 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2547263 ']' 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2547263 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2547263 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2547263' 00:29:42.669 killing process with pid 2547263 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2547263 00:29:42.669 04:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2547263 00:29:42.929 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:42.929 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:42.929 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:42.929 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:29:42.929 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:29:42.929 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:29:42.929 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:42.929 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:42.929 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:42.929 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.929 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.929 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:45.471 00:29:45.471 real 0m9.268s 00:29:45.471 user 0m17.106s 00:29:45.471 sys 0m3.521s 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:45.471 ************************************ 00:29:45.471 END TEST nvmf_nmic 00:29:45.471 ************************************ 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:45.471 ************************************ 00:29:45.471 START TEST nvmf_fio_target 00:29:45.471 ************************************ 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:29:45.471 * Looking for test storage... 
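For reference, the nvmf_nmic test case traced above reduces to a short JSON-RPC sequence against the target. The following is a minimal sketch only, assuming SPDK's scripts/rpc.py client and reusing the bdev and subsystem names that appear in the trace (Malloc0, cnode1, cnode2); it is not an additional step performed by the harness.
    # create the TCP transport and a malloc bdev
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # attach Malloc0 to a first subsystem and expose it on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # adding the same bdev to a second subsystem is expected to fail:
    # the bdev is already claimed exclusive_write by the first subsystem
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # returns -32602 Invalid parameters
The expected failure on the last call is what the trace records as "Adding namespace failed - expected result."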
00:29:45.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:45.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.471 --rc genhtml_branch_coverage=1 00:29:45.471 --rc genhtml_function_coverage=1 00:29:45.471 --rc genhtml_legend=1 00:29:45.471 --rc geninfo_all_blocks=1 00:29:45.471 --rc geninfo_unexecuted_blocks=1 00:29:45.471 00:29:45.471 ' 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:45.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.471 --rc genhtml_branch_coverage=1 00:29:45.471 --rc genhtml_function_coverage=1 00:29:45.471 --rc genhtml_legend=1 00:29:45.471 --rc geninfo_all_blocks=1 00:29:45.471 --rc geninfo_unexecuted_blocks=1 00:29:45.471 00:29:45.471 ' 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:45.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.471 --rc genhtml_branch_coverage=1 00:29:45.471 --rc genhtml_function_coverage=1 00:29:45.471 --rc genhtml_legend=1 00:29:45.471 --rc geninfo_all_blocks=1 00:29:45.471 --rc geninfo_unexecuted_blocks=1 00:29:45.471 00:29:45.471 ' 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:45.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.471 --rc genhtml_branch_coverage=1 00:29:45.471 --rc genhtml_function_coverage=1 00:29:45.471 --rc genhtml_legend=1 00:29:45.471 --rc geninfo_all_blocks=1 00:29:45.471 --rc geninfo_unexecuted_blocks=1 00:29:45.471 
00:29:45.471 ' 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.471 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:29:45.472 04:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:47.376 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.376 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:29:47.376 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:47.376 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:47.377 04:17:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:47.377 04:17:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:47.377 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:47.377 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:47.377 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:47.377 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:47.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:29:47.377 00:29:47.377 --- 10.0.0.2 ping statistics --- 00:29:47.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.377 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:29:47.377 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:47.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:29:47.378 00:29:47.378 --- 10.0.0.1 ping statistics --- 00:29:47.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.378 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2549837 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2549837 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2549837 ']' 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.378 04:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:47.638 [2024-12-10 04:17:41.764519] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:47.638 [2024-12-10 04:17:41.765745] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:29:47.638 [2024-12-10 04:17:41.765803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.638 [2024-12-10 04:17:41.836959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:47.638 [2024-12-10 04:17:41.891388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.638 [2024-12-10 04:17:41.891447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.638 [2024-12-10 04:17:41.891475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.638 [2024-12-10 04:17:41.891486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.638 [2024-12-10 04:17:41.891495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:47.638 [2024-12-10 04:17:41.893080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.638 [2024-12-10 04:17:41.893140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.638 [2024-12-10 04:17:41.893207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.638 [2024-12-10 04:17:41.893209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.638 [2024-12-10 04:17:41.977404] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:47.638 [2024-12-10 04:17:41.977631] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:47.638 [2024-12-10 04:17:41.977920] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:47.638 [2024-12-10 04:17:41.978610] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:47.638 [2024-12-10 04:17:41.978855] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
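With the target now running in interrupt mode inside cvl_0_0_ns_spdk, the trace that follows configures it through scripts/rpc.py and connects the kernel initiator. Condensed, with the NQN, serial, bdev names and address exactly as they appear in this run (the seven identical bdev_malloc_create calls are collapsed into one commented line), the sequence is roughly:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # TCP transport, with the extra options fio.sh passes (-o -u 8192).
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # Seven malloc bdevs (Malloc0..Malloc6), each created with the same "64 512" arguments.
  $rpc bdev_malloc_create 64 512
  # Two of them become a RAID0 bdev, three more a concat bdev.
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  # One subsystem exposing four namespaces, listening on the namespaced address.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  # Kernel initiator connects from the default namespace; the four namespaces
  # appear as /dev/nvme0n1..nvme0n4 and are counted by serial via lsblk.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55

The fio-wrapper runs that follow then exercise those four block devices with write, randwrite and read workloads at queue depths 1 and 128.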
00:29:47.638 04:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.638 04:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:29:47.638 04:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:47.638 04:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:47.638 04:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:47.896 04:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.896 04:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:48.154 [2024-12-10 04:17:42.289936] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.154 04:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:48.412 04:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:29:48.412 04:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:48.670 04:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:29:48.670 04:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:48.927 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:29:48.927 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:49.186 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:29:49.187 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:29:49.447 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:49.707 04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:29:49.707 04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:50.274 04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:29:50.274 04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:50.274 04:17:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:29:50.274 04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:29:50.841 04:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:50.841 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:50.841 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:51.098 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:51.098 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:51.664 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:51.664 [2024-12-10 04:17:46.042077] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.923 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:29:52.181 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:29:52.439 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:52.439 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:29:52.439 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:29:52.439 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:29:52.439 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:29:52.439 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:29:52.439 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:29:54.973 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:29:54.973 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:29:54.973 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:29:54.973 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:29:54.973 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:29:54.973 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:29:54.973 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:54.973 [global] 00:29:54.973 thread=1 00:29:54.973 invalidate=1 00:29:54.973 rw=write 00:29:54.973 time_based=1 00:29:54.973 runtime=1 00:29:54.973 ioengine=libaio 00:29:54.973 direct=1 00:29:54.973 bs=4096 00:29:54.973 iodepth=1 00:29:54.973 norandommap=0 00:29:54.973 numjobs=1 00:29:54.973 00:29:54.973 verify_dump=1 00:29:54.973 verify_backlog=512 00:29:54.973 verify_state_save=0 00:29:54.973 do_verify=1 00:29:54.973 verify=crc32c-intel 00:29:54.973 [job0] 00:29:54.973 filename=/dev/nvme0n1 00:29:54.973 [job1] 00:29:54.973 filename=/dev/nvme0n2 00:29:54.973 [job2] 00:29:54.973 filename=/dev/nvme0n3 00:29:54.973 [job3] 00:29:54.973 filename=/dev/nvme0n4 00:29:54.973 Could not set queue depth (nvme0n1) 00:29:54.973 Could not set queue depth (nvme0n2) 00:29:54.973 Could not set queue depth (nvme0n3) 00:29:54.973 Could not set queue depth (nvme0n4) 00:29:54.973 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:54.973 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:54.973 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:54.973 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:54.973 fio-3.35 00:29:54.973 Starting 4 threads 00:29:55.911 00:29:55.911 job0: (groupid=0, jobs=1): err= 0: pid=2550797: Tue Dec 10 04:17:50 2024 00:29:55.911 read: IOPS=1287, BW=5150KiB/s (5273kB/s)(5196KiB/1009msec) 00:29:55.911 slat (nsec): min=5045, max=79149, avg=22288.60, stdev=11461.05 00:29:55.911 clat (usec): min=208, max=41478, avg=465.50, stdev=1634.96 00:29:55.911 lat (usec): min=223, max=41494, avg=487.79, stdev=1634.78 00:29:55.911 clat percentiles (usec): 00:29:55.911 | 1.00th=[ 221], 5.00th=[ 237], 10.00th=[ 262], 20.00th=[ 302], 00:29:55.911 | 30.00th=[ 334], 40.00th=[ 363], 50.00th=[ 383], 60.00th=[ 420], 00:29:55.911 | 70.00th=[ 453], 80.00th=[ 482], 90.00th=[ 510], 95.00th=[ 545], 00:29:55.911 | 99.00th=[ 586], 99.50th=[ 644], 99.90th=[41157], 99.95th=[41681], 00:29:55.911 | 99.99th=[41681] 00:29:55.911 write: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec); 0 zone resets 00:29:55.911 slat (nsec): min=6602, max=64000, avg=14773.28, stdev=6558.05 00:29:55.911 clat (usec): min=153, max=420, avg=219.50, stdev=30.11 00:29:55.911 lat (usec): min=178, max=439, avg=234.27, stdev=30.93 00:29:55.911 clat percentiles (usec): 00:29:55.911 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 198], 00:29:55.912 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:29:55.912 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 260], 00:29:55.912 | 99.00th=[ 359], 
99.50th=[ 383], 99.90th=[ 416], 99.95th=[ 420], 00:29:55.912 | 99.99th=[ 420] 00:29:55.912 bw ( KiB/s): min= 4096, max= 8192, per=30.63%, avg=6144.00, stdev=2896.31, samples=2 00:29:55.912 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:29:55.912 lat (usec) : 250=52.91%, 500=41.16%, 750=5.75% 00:29:55.912 lat (msec) : 4=0.04%, 10=0.04%, 20=0.04%, 50=0.07% 00:29:55.912 cpu : usr=1.79%, sys=6.25%, ctx=2837, majf=0, minf=1 00:29:55.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:55.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.912 issued rwts: total=1299,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:55.912 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:55.912 job1: (groupid=0, jobs=1): err= 0: pid=2550813: Tue Dec 10 04:17:50 2024 00:29:55.912 read: IOPS=321, BW=1285KiB/s (1316kB/s)(1308KiB/1018msec) 00:29:55.912 slat (nsec): min=6513, max=38778, avg=15011.48, stdev=6668.61 00:29:55.912 clat (usec): min=214, max=41351, avg=2755.40, stdev=9761.64 00:29:55.912 lat (usec): min=221, max=41373, avg=2770.42, stdev=9762.59 00:29:55.912 clat percentiles (usec): 00:29:55.912 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 245], 00:29:55.912 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:29:55.912 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[40633], 00:29:55.912 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:55.912 | 99.99th=[41157] 00:29:55.912 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:29:55.912 slat (nsec): min=7937, max=59903, avg=15091.44, stdev=7558.00 00:29:55.912 clat (usec): min=150, max=257, avg=196.42, stdev=20.46 00:29:55.912 lat (usec): min=159, max=293, avg=211.52, stdev=24.92 00:29:55.912 clat percentiles (usec): 00:29:55.912 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 178], 00:29:55.912 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:29:55.912 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 229], 00:29:55.912 | 99.00th=[ 247], 99.50th=[ 251], 99.90th=[ 258], 99.95th=[ 258], 00:29:55.912 | 99.99th=[ 258] 00:29:55.912 bw ( KiB/s): min= 4096, max= 4096, per=20.42%, avg=4096.00, stdev= 0.00, samples=1 00:29:55.912 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:55.912 lat (usec) : 250=70.56%, 500=26.94%, 1000=0.12% 00:29:55.912 lat (msec) : 50=2.38% 00:29:55.912 cpu : usr=1.28%, sys=1.28%, ctx=842, majf=0, minf=2 00:29:55.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:55.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.912 issued rwts: total=327,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:55.912 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:55.912 job2: (groupid=0, jobs=1): err= 0: pid=2550849: Tue Dec 10 04:17:50 2024 00:29:55.912 read: IOPS=1269, BW=5077KiB/s (5199kB/s)(5184KiB/1021msec) 00:29:55.912 slat (nsec): min=5584, max=66815, avg=21502.34, stdev=11063.19 00:29:55.912 clat (usec): min=208, max=40973, avg=484.50, stdev=1945.83 00:29:55.912 lat (usec): min=217, max=40988, avg=506.00, stdev=1945.66 00:29:55.912 clat percentiles (usec): 00:29:55.912 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 249], 20.00th=[ 285], 00:29:55.912 | 30.00th=[ 
330], 40.00th=[ 367], 50.00th=[ 392], 60.00th=[ 429], 00:29:55.912 | 70.00th=[ 461], 80.00th=[ 482], 90.00th=[ 519], 95.00th=[ 545], 00:29:55.912 | 99.00th=[ 603], 99.50th=[ 644], 99.90th=[41157], 99.95th=[41157], 00:29:55.912 | 99.99th=[41157] 00:29:55.912 write: IOPS=1504, BW=6018KiB/s (6162kB/s)(6144KiB/1021msec); 0 zone resets 00:29:55.912 slat (nsec): min=6157, max=49919, avg=13118.81, stdev=5055.76 00:29:55.912 clat (usec): min=135, max=401, avg=215.75, stdev=25.95 00:29:55.912 lat (usec): min=145, max=410, avg=228.87, stdev=25.84 00:29:55.912 clat percentiles (usec): 00:29:55.912 | 1.00th=[ 155], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 196], 00:29:55.912 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 221], 00:29:55.912 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 255], 00:29:55.912 | 99.00th=[ 285], 99.50th=[ 314], 99.90th=[ 375], 99.95th=[ 400], 00:29:55.912 | 99.99th=[ 400] 00:29:55.912 bw ( KiB/s): min= 4096, max= 8192, per=30.63%, avg=6144.00, stdev=2896.31, samples=2 00:29:55.912 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:29:55.912 lat (usec) : 250=54.77%, 500=38.88%, 750=6.25% 00:29:55.912 lat (msec) : 50=0.11% 00:29:55.912 cpu : usr=2.16%, sys=5.29%, ctx=2832, majf=0, minf=2 00:29:55.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:55.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.912 issued rwts: total=1296,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:55.912 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:55.912 job3: (groupid=0, jobs=1): err= 0: pid=2550859: Tue Dec 10 04:17:50 2024 00:29:55.912 read: IOPS=1340, BW=5364KiB/s (5492kB/s)(5412KiB/1009msec) 00:29:55.912 slat (nsec): min=5176, max=67632, avg=17624.37, stdev=9255.90 00:29:55.912 clat (usec): min=233, max=42324, avg=466.97, stdev=2511.99 00:29:55.912 lat (usec): min=241, max=42334, avg=484.60, stdev=2511.62 00:29:55.912 clat percentiles (usec): 00:29:55.912 | 1.00th=[ 245], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 262], 00:29:55.912 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 306], 00:29:55.912 | 70.00th=[ 330], 80.00th=[ 355], 90.00th=[ 388], 95.00th=[ 441], 00:29:55.912 | 99.00th=[ 498], 99.50th=[ 3195], 99.90th=[41681], 99.95th=[42206], 00:29:55.912 | 99.99th=[42206] 00:29:55.912 write: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec); 0 zone resets 00:29:55.912 slat (nsec): min=6838, max=62235, avg=16277.13, stdev=5993.09 00:29:55.912 clat (usec): min=166, max=997, avg=204.57, stdev=38.25 00:29:55.912 lat (usec): min=176, max=1018, avg=220.85, stdev=38.53 00:29:55.912 clat percentiles (usec): 00:29:55.912 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:29:55.912 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:29:55.912 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 237], 00:29:55.912 | 99.00th=[ 297], 99.50th=[ 347], 99.90th=[ 824], 99.95th=[ 996], 00:29:55.912 | 99.99th=[ 996] 00:29:55.912 bw ( KiB/s): min= 4096, max= 8192, per=30.63%, avg=6144.00, stdev=2896.31, samples=2 00:29:55.912 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:29:55.912 lat (usec) : 250=53.82%, 500=45.62%, 750=0.24%, 1000=0.07% 00:29:55.912 lat (msec) : 4=0.03%, 10=0.03%, 50=0.17% 00:29:55.912 cpu : usr=2.38%, sys=5.26%, ctx=2890, majf=0, minf=1 00:29:55.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:29:55.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.912 issued rwts: total=1353,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:55.912 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:55.912 00:29:55.912 Run status group 0 (all jobs): 00:29:55.912 READ: bw=16.4MiB/s (17.1MB/s), 1285KiB/s-5364KiB/s (1316kB/s-5492kB/s), io=16.7MiB (17.5MB), run=1009-1021msec 00:29:55.912 WRITE: bw=19.6MiB/s (20.5MB/s), 2012KiB/s-6089KiB/s (2060kB/s-6235kB/s), io=20.0MiB (21.0MB), run=1009-1021msec 00:29:55.912 00:29:55.912 Disk stats (read/write): 00:29:55.912 nvme0n1: ios=1139/1536, merge=0/0, ticks=714/320, in_queue=1034, util=98.40% 00:29:55.912 nvme0n2: ios=344/512, merge=0/0, ticks=1652/94, in_queue=1746, util=97.97% 00:29:55.912 nvme0n3: ios=1130/1536, merge=0/0, ticks=435/331, in_queue=766, util=88.78% 00:29:55.912 nvme0n4: ios=1288/1536, merge=0/0, ticks=692/293, in_queue=985, util=98.41% 00:29:55.912 04:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:29:55.912 [global] 00:29:55.912 thread=1 00:29:55.912 invalidate=1 00:29:55.912 rw=randwrite 00:29:55.912 time_based=1 00:29:55.912 runtime=1 00:29:55.912 ioengine=libaio 00:29:55.912 direct=1 00:29:55.912 bs=4096 00:29:55.912 iodepth=1 00:29:55.912 norandommap=0 00:29:55.912 numjobs=1 00:29:55.912 00:29:55.912 verify_dump=1 00:29:55.912 verify_backlog=512 00:29:55.912 verify_state_save=0 00:29:55.912 do_verify=1 00:29:55.912 verify=crc32c-intel 00:29:55.912 [job0] 00:29:55.912 filename=/dev/nvme0n1 00:29:55.912 [job1] 00:29:55.912 filename=/dev/nvme0n2 00:29:55.912 [job2] 00:29:55.912 filename=/dev/nvme0n3 00:29:55.912 [job3] 00:29:55.912 filename=/dev/nvme0n4 00:29:55.912 Could not set queue depth (nvme0n1) 00:29:55.912 Could not set queue depth (nvme0n2) 00:29:55.912 Could not set queue depth (nvme0n3) 00:29:55.913 Could not set queue depth (nvme0n4) 00:29:56.170 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:56.170 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:56.170 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:56.170 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:56.170 fio-3.35 00:29:56.171 Starting 4 threads 00:29:57.549 00:29:57.549 job0: (groupid=0, jobs=1): err= 0: pid=2551127: Tue Dec 10 04:17:51 2024 00:29:57.549 read: IOPS=21, BW=85.0KiB/s (87.1kB/s)(88.0KiB/1035msec) 00:29:57.549 slat (nsec): min=8490, max=22459, avg=13257.73, stdev=2431.95 00:29:57.549 clat (usec): min=40883, max=41600, avg=41032.10, stdev=161.54 00:29:57.549 lat (usec): min=40906, max=41613, avg=41045.36, stdev=161.28 00:29:57.549 clat percentiles (usec): 00:29:57.549 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:57.549 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:57.549 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:57.549 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:29:57.549 | 99.99th=[41681] 00:29:57.549 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 
00:29:57.549 slat (nsec): min=7976, max=70179, avg=10708.52, stdev=4296.15 00:29:57.549 clat (usec): min=191, max=304, avg=242.61, stdev= 9.90 00:29:57.549 lat (usec): min=201, max=332, avg=253.32, stdev=10.27 00:29:57.549 clat percentiles (usec): 00:29:57.549 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:29:57.549 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 243], 00:29:57.549 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 260], 00:29:57.549 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 306], 99.95th=[ 306], 00:29:57.549 | 99.99th=[ 306] 00:29:57.549 bw ( KiB/s): min= 4096, max= 4096, per=25.88%, avg=4096.00, stdev= 0.00, samples=1 00:29:57.549 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:57.549 lat (usec) : 250=79.59%, 500=16.29% 00:29:57.549 lat (msec) : 50=4.12% 00:29:57.549 cpu : usr=0.10%, sys=0.58%, ctx=535, majf=0, minf=1 00:29:57.549 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:57.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.549 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:57.549 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:57.550 job1: (groupid=0, jobs=1): err= 0: pid=2551128: Tue Dec 10 04:17:51 2024 00:29:57.550 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:29:57.550 slat (nsec): min=7477, max=34588, avg=20785.00, stdev=9438.69 00:29:57.550 clat (usec): min=40625, max=43994, avg=41091.24, stdev=654.07 00:29:57.550 lat (usec): min=40632, max=44011, avg=41112.03, stdev=653.38 00:29:57.550 clat percentiles (usec): 00:29:57.550 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:29:57.550 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:57.550 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:57.550 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:29:57.550 | 99.99th=[43779] 00:29:57.550 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:29:57.550 slat (nsec): min=7212, max=28861, avg=8496.28, stdev=2368.84 00:29:57.550 clat (usec): min=195, max=267, avg=226.14, stdev=10.72 00:29:57.550 lat (usec): min=204, max=275, avg=234.64, stdev=10.58 00:29:57.550 clat percentiles (usec): 00:29:57.550 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:29:57.550 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 229], 00:29:57.550 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 241], 95.00th=[ 245], 00:29:57.550 | 99.00th=[ 251], 99.50th=[ 262], 99.90th=[ 269], 99.95th=[ 269], 00:29:57.550 | 99.99th=[ 269] 00:29:57.550 bw ( KiB/s): min= 4096, max= 4096, per=25.88%, avg=4096.00, stdev= 0.00, samples=1 00:29:57.550 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:57.550 lat (usec) : 250=94.38%, 500=1.50% 00:29:57.550 lat (msec) : 50=4.12% 00:29:57.550 cpu : usr=0.20%, sys=0.68%, ctx=534, majf=0, minf=1 00:29:57.550 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:57.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.550 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:57.550 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:57.550 job2: (groupid=0, jobs=1): err= 0: 
pid=2551129: Tue Dec 10 04:17:51 2024 00:29:57.550 read: IOPS=2000, BW=8000KiB/s (8192kB/s)(8008KiB/1001msec) 00:29:57.550 slat (nsec): min=4271, max=50784, avg=9327.62, stdev=5541.21 00:29:57.550 clat (usec): min=201, max=650, avg=263.36, stdev=55.35 00:29:57.550 lat (usec): min=215, max=667, avg=272.69, stdev=58.76 00:29:57.550 clat percentiles (usec): 00:29:57.550 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 227], 00:29:57.550 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 262], 00:29:57.550 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 351], 00:29:57.550 | 99.00th=[ 494], 99.50th=[ 545], 99.90th=[ 652], 99.95th=[ 652], 00:29:57.550 | 99.99th=[ 652] 00:29:57.550 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:29:57.550 slat (nsec): min=6608, max=52799, avg=13091.67, stdev=5923.12 00:29:57.550 clat (usec): min=143, max=369, avg=202.05, stdev=35.10 00:29:57.550 lat (usec): min=159, max=388, avg=215.15, stdev=35.72 00:29:57.550 clat percentiles (usec): 00:29:57.550 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:29:57.550 | 30.00th=[ 172], 40.00th=[ 182], 50.00th=[ 200], 60.00th=[ 210], 00:29:57.550 | 70.00th=[ 225], 80.00th=[ 237], 90.00th=[ 253], 95.00th=[ 265], 00:29:57.550 | 99.00th=[ 281], 99.50th=[ 285], 99.90th=[ 310], 99.95th=[ 310], 00:29:57.550 | 99.99th=[ 371] 00:29:57.550 bw ( KiB/s): min= 8512, max= 8512, per=53.77%, avg=8512.00, stdev= 0.00, samples=1 00:29:57.550 iops : min= 2128, max= 2128, avg=2128.00, stdev= 0.00, samples=1 00:29:57.550 lat (usec) : 250=71.65%, 500=27.88%, 750=0.47% 00:29:57.550 cpu : usr=3.70%, sys=5.20%, ctx=4050, majf=0, minf=1 00:29:57.550 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:57.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.550 issued rwts: total=2002,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:57.550 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:57.550 job3: (groupid=0, jobs=1): err= 0: pid=2551131: Tue Dec 10 04:17:51 2024 00:29:57.550 read: IOPS=511, BW=2048KiB/s (2097kB/s)(2056KiB/1004msec) 00:29:57.550 slat (nsec): min=5642, max=36222, avg=9083.66, stdev=4763.34 00:29:57.550 clat (usec): min=220, max=41084, avg=1519.14, stdev=7064.51 00:29:57.550 lat (usec): min=226, max=41098, avg=1528.22, stdev=7066.37 00:29:57.550 clat percentiles (usec): 00:29:57.550 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 231], 20.00th=[ 233], 00:29:57.550 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 249], 00:29:57.550 | 70.00th=[ 258], 80.00th=[ 277], 90.00th=[ 314], 95.00th=[ 338], 00:29:57.550 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:57.550 | 99.99th=[41157] 00:29:57.550 write: IOPS=1019, BW=4080KiB/s (4178kB/s)(4096KiB/1004msec); 0 zone resets 00:29:57.550 slat (nsec): min=6007, max=39653, avg=10344.43, stdev=4572.98 00:29:57.550 clat (usec): min=143, max=427, avg=198.96, stdev=32.76 00:29:57.550 lat (usec): min=153, max=452, avg=209.31, stdev=30.41 00:29:57.550 clat percentiles (usec): 00:29:57.550 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:29:57.550 | 30.00th=[ 169], 40.00th=[ 178], 50.00th=[ 204], 60.00th=[ 221], 00:29:57.550 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 237], 95.00th=[ 243], 00:29:57.550 | 99.00th=[ 253], 99.50th=[ 258], 99.90th=[ 383], 99.95th=[ 429], 00:29:57.550 | 99.99th=[ 429] 00:29:57.550 bw ( 
KiB/s): min= 4096, max= 4096, per=25.88%, avg=4096.00, stdev= 0.00, samples=2 00:29:57.550 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:29:57.550 lat (usec) : 250=86.54%, 500=12.42% 00:29:57.550 lat (msec) : 50=1.04% 00:29:57.550 cpu : usr=1.00%, sys=1.30%, ctx=1538, majf=0, minf=1 00:29:57.550 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:57.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.550 issued rwts: total=514,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:57.550 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:57.550 00:29:57.550 Run status group 0 (all jobs): 00:29:57.550 READ: bw=9894KiB/s (10.1MB/s), 85.0KiB/s-8000KiB/s (87.1kB/s-8192kB/s), io=10.0MiB (10.5MB), run=1001-1035msec 00:29:57.550 WRITE: bw=15.5MiB/s (16.2MB/s), 1979KiB/s-8184KiB/s (2026kB/s-8380kB/s), io=16.0MiB (16.8MB), run=1001-1035msec 00:29:57.550 00:29:57.550 Disk stats (read/write): 00:29:57.550 nvme0n1: ios=67/512, merge=0/0, ticks=718/123, in_queue=841, util=86.57% 00:29:57.550 nvme0n2: ios=33/512, merge=0/0, ticks=708/111, in_queue=819, util=86.69% 00:29:57.550 nvme0n3: ios=1559/1929, merge=0/0, ticks=493/367, in_queue=860, util=91.64% 00:29:57.550 nvme0n4: ios=512/627, merge=0/0, ticks=698/127, in_queue=825, util=89.57% 00:29:57.550 04:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:29:57.550 [global] 00:29:57.550 thread=1 00:29:57.550 invalidate=1 00:29:57.550 rw=write 00:29:57.550 time_based=1 00:29:57.550 runtime=1 00:29:57.550 ioengine=libaio 00:29:57.550 direct=1 00:29:57.550 bs=4096 00:29:57.550 iodepth=128 00:29:57.550 norandommap=0 00:29:57.550 numjobs=1 00:29:57.550 00:29:57.550 verify_dump=1 00:29:57.550 verify_backlog=512 00:29:57.550 verify_state_save=0 00:29:57.550 do_verify=1 00:29:57.550 verify=crc32c-intel 00:29:57.550 [job0] 00:29:57.550 filename=/dev/nvme0n1 00:29:57.550 [job1] 00:29:57.550 filename=/dev/nvme0n2 00:29:57.550 [job2] 00:29:57.550 filename=/dev/nvme0n3 00:29:57.550 [job3] 00:29:57.550 filename=/dev/nvme0n4 00:29:57.550 Could not set queue depth (nvme0n1) 00:29:57.550 Could not set queue depth (nvme0n2) 00:29:57.550 Could not set queue depth (nvme0n3) 00:29:57.550 Could not set queue depth (nvme0n4) 00:29:57.808 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:57.808 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:57.808 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:57.808 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:57.808 fio-3.35 00:29:57.808 Starting 4 threads 00:29:59.182 00:29:59.182 job0: (groupid=0, jobs=1): err= 0: pid=2551360: Tue Dec 10 04:17:53 2024 00:29:59.182 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:29:59.182 slat (usec): min=2, max=11602, avg=121.31, stdev=692.96 00:29:59.182 clat (usec): min=7659, max=32028, avg=16040.52, stdev=3780.95 00:29:59.182 lat (usec): min=7904, max=32031, avg=16161.83, stdev=3786.09 00:29:59.182 clat percentiles (usec): 00:29:59.182 | 1.00th=[ 9241], 5.00th=[10683], 10.00th=[11863], 20.00th=[12911], 00:29:59.182 | 
30.00th=[13566], 40.00th=[14091], 50.00th=[15401], 60.00th=[16712], 00:29:59.182 | 70.00th=[18220], 80.00th=[19268], 90.00th=[21103], 95.00th=[22414], 00:29:59.183 | 99.00th=[25297], 99.50th=[27395], 99.90th=[28967], 99.95th=[32113], 00:29:59.183 | 99.99th=[32113] 00:29:59.183 write: IOPS=3992, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1004msec); 0 zone resets 00:29:59.183 slat (usec): min=3, max=28055, avg=136.49, stdev=826.25 00:29:59.183 clat (usec): min=3631, max=67877, avg=17431.44, stdev=8578.96 00:29:59.183 lat (usec): min=4662, max=67883, avg=17567.93, stdev=8623.64 00:29:59.183 clat percentiles (usec): 00:29:59.183 | 1.00th=[ 8455], 5.00th=[10290], 10.00th=[11338], 20.00th=[12518], 00:29:59.183 | 30.00th=[13829], 40.00th=[14222], 50.00th=[15926], 60.00th=[17695], 00:29:59.183 | 70.00th=[18482], 80.00th=[19530], 90.00th=[22676], 95.00th=[26608], 00:29:59.183 | 99.00th=[60556], 99.50th=[65274], 99.90th=[67634], 99.95th=[67634], 00:29:59.183 | 99.99th=[67634] 00:29:59.183 bw ( KiB/s): min=14664, max=16416, per=26.48%, avg=15540.00, stdev=1238.85, samples=2 00:29:59.183 iops : min= 3666, max= 4104, avg=3885.00, stdev=309.71, samples=2 00:29:59.183 lat (msec) : 4=0.01%, 10=2.79%, 20=80.30%, 50=15.44%, 100=1.46% 00:29:59.183 cpu : usr=1.99%, sys=3.59%, ctx=413, majf=0, minf=1 00:29:59.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:59.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:59.183 issued rwts: total=3584,4008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:59.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:59.183 job1: (groupid=0, jobs=1): err= 0: pid=2551361: Tue Dec 10 04:17:53 2024 00:29:59.183 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:29:59.183 slat (usec): min=2, max=6675, avg=111.77, stdev=591.01 00:29:59.183 clat (usec): min=8479, max=22712, avg=14243.84, stdev=2946.57 00:29:59.183 lat (usec): min=8974, max=22720, avg=14355.60, stdev=2930.44 00:29:59.183 clat percentiles (usec): 00:29:59.183 | 1.00th=[ 9110], 5.00th=[11076], 10.00th=[11207], 20.00th=[11600], 00:29:59.183 | 30.00th=[12125], 40.00th=[12649], 50.00th=[13566], 60.00th=[14353], 00:29:59.183 | 70.00th=[15401], 80.00th=[16909], 90.00th=[18482], 95.00th=[20579], 00:29:59.183 | 99.00th=[21365], 99.50th=[21890], 99.90th=[22676], 99.95th=[22676], 00:29:59.183 | 99.99th=[22676] 00:29:59.183 write: IOPS=4338, BW=16.9MiB/s (17.8MB/s)(17.0MiB/1005msec); 0 zone resets 00:29:59.183 slat (usec): min=3, max=18258, avg=119.68, stdev=685.85 00:29:59.183 clat (usec): min=3693, max=53528, avg=15708.02, stdev=7096.75 00:29:59.183 lat (usec): min=4723, max=53535, avg=15827.70, stdev=7118.45 00:29:59.183 clat percentiles (usec): 00:29:59.183 | 1.00th=[ 7177], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10945], 00:29:59.183 | 30.00th=[11994], 40.00th=[12518], 50.00th=[14222], 60.00th=[15664], 00:29:59.183 | 70.00th=[17433], 80.00th=[18744], 90.00th=[21890], 95.00th=[26608], 00:29:59.183 | 99.00th=[53216], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:29:59.183 | 99.99th=[53740] 00:29:59.183 bw ( KiB/s): min=14256, max=19608, per=28.85%, avg=16932.00, stdev=3784.44, samples=2 00:29:59.183 iops : min= 3564, max= 4902, avg=4233.00, stdev=946.11, samples=2 00:29:59.183 lat (msec) : 4=0.01%, 10=6.66%, 20=84.08%, 50=8.50%, 100=0.75% 00:29:59.183 cpu : usr=2.09%, sys=4.18%, ctx=433, majf=0, minf=1 00:29:59.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.2%, 32=0.4%, >=64=99.3% 00:29:59.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:59.183 issued rwts: total=4096,4360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:59.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:59.183 job2: (groupid=0, jobs=1): err= 0: pid=2551362: Tue Dec 10 04:17:53 2024 00:29:59.183 read: IOPS=3519, BW=13.7MiB/s (14.4MB/s)(14.4MiB/1046msec) 00:29:59.183 slat (usec): min=3, max=9252, avg=123.95, stdev=797.95 00:29:59.183 clat (usec): min=8260, max=63227, avg=16392.85, stdev=7117.77 00:29:59.183 lat (usec): min=10141, max=63238, avg=16516.80, stdev=7165.09 00:29:59.183 clat percentiles (usec): 00:29:59.183 | 1.00th=[10159], 5.00th=[11076], 10.00th=[11731], 20.00th=[12256], 00:29:59.183 | 30.00th=[13566], 40.00th=[14222], 50.00th=[15008], 60.00th=[15533], 00:29:59.183 | 70.00th=[16581], 80.00th=[17957], 90.00th=[19792], 95.00th=[25297], 00:29:59.183 | 99.00th=[54264], 99.50th=[58459], 99.90th=[63177], 99.95th=[63177], 00:29:59.183 | 99.99th=[63177] 00:29:59.183 write: IOPS=3915, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1046msec); 0 zone resets 00:29:59.183 slat (usec): min=3, max=9727, avg=127.51, stdev=711.77 00:29:59.183 clat (usec): min=5955, max=68280, avg=17540.25, stdev=7736.91 00:29:59.183 lat (usec): min=5965, max=68292, avg=17667.75, stdev=7791.74 00:29:59.183 clat percentiles (usec): 00:29:59.183 | 1.00th=[10814], 5.00th=[11600], 10.00th=[12125], 20.00th=[12518], 00:29:59.183 | 30.00th=[13042], 40.00th=[13566], 50.00th=[14877], 60.00th=[15401], 00:29:59.183 | 70.00th=[16712], 80.00th=[22152], 90.00th=[29754], 95.00th=[31589], 00:29:59.183 | 99.00th=[39584], 99.50th=[63177], 99.90th=[68682], 99.95th=[68682], 00:29:59.183 | 99.99th=[68682] 00:29:59.183 bw ( KiB/s): min=14224, max=18296, per=27.71%, avg=16260.00, stdev=2879.34, samples=2 00:29:59.183 iops : min= 3556, max= 4574, avg=4065.00, stdev=719.83, samples=2 00:29:59.183 lat (msec) : 10=0.49%, 20=82.86%, 50=15.44%, 100=1.21% 00:29:59.183 cpu : usr=3.54%, sys=4.59%, ctx=319, majf=0, minf=1 00:29:59.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:59.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:59.183 issued rwts: total=3681,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:59.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:59.183 job3: (groupid=0, jobs=1): err= 0: pid=2551363: Tue Dec 10 04:17:53 2024 00:29:59.183 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:29:59.183 slat (usec): min=2, max=44133, avg=174.78, stdev=1235.26 00:29:59.183 clat (usec): min=10942, max=56032, avg=22061.96, stdev=10585.84 00:29:59.183 lat (usec): min=10946, max=56039, avg=22236.74, stdev=10611.42 00:29:59.183 clat percentiles (usec): 00:29:59.183 | 1.00th=[12256], 5.00th=[13435], 10.00th=[14484], 20.00th=[15533], 00:29:59.183 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16909], 60.00th=[18482], 00:29:59.183 | 70.00th=[21365], 80.00th=[31065], 90.00th=[36963], 95.00th=[39060], 00:29:59.183 | 99.00th=[55837], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:29:59.183 | 99.99th=[55837] 00:29:59.183 write: IOPS=2866, BW=11.2MiB/s (11.7MB/s)(11.3MiB/1005msec); 0 zone resets 00:29:59.183 slat (usec): min=3, max=20635, avg=188.32, stdev=1188.96 00:29:59.183 clat (usec): min=570, max=68955, avg=24462.41, 
stdev=9938.48 00:29:59.183 lat (usec): min=4954, max=68972, avg=24650.73, stdev=10010.19 00:29:59.183 clat percentiles (usec): 00:29:59.183 | 1.00th=[ 5407], 5.00th=[11076], 10.00th=[14353], 20.00th=[16581], 00:29:59.183 | 30.00th=[17433], 40.00th=[21103], 50.00th=[23462], 60.00th=[25822], 00:29:59.183 | 70.00th=[27919], 80.00th=[30802], 90.00th=[39060], 95.00th=[43254], 00:29:59.183 | 99.00th=[53216], 99.50th=[53216], 99.90th=[58459], 99.95th=[60031], 00:29:59.183 | 99.99th=[68682] 00:29:59.183 bw ( KiB/s): min= 9736, max=12288, per=18.77%, avg=11012.00, stdev=1804.54, samples=2 00:29:59.183 iops : min= 2434, max= 3072, avg=2753.00, stdev=451.13, samples=2 00:29:59.183 lat (usec) : 750=0.02% 00:29:59.183 lat (msec) : 10=1.25%, 20=50.12%, 50=45.25%, 100=3.36% 00:29:59.183 cpu : usr=1.29%, sys=2.99%, ctx=256, majf=0, minf=2 00:29:59.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:29:59.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:59.183 issued rwts: total=2560,2881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:59.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:59.183 00:29:59.183 Run status group 0 (all jobs): 00:29:59.183 READ: bw=52.0MiB/s (54.5MB/s), 9.95MiB/s-15.9MiB/s (10.4MB/s-16.7MB/s), io=54.4MiB (57.0MB), run=1004-1046msec 00:29:59.183 WRITE: bw=57.3MiB/s (60.1MB/s), 11.2MiB/s-16.9MiB/s (11.7MB/s-17.8MB/s), io=59.9MiB (62.9MB), run=1004-1046msec 00:29:59.183 00:29:59.183 Disk stats (read/write): 00:29:59.183 nvme0n1: ios=3117/3495, merge=0/0, ticks=19475/23297, in_queue=42772, util=96.19% 00:29:59.183 nvme0n2: ios=3628/3863, merge=0/0, ticks=12853/15298, in_queue=28151, util=96.85% 00:29:59.183 nvme0n3: ios=3352/3584, merge=0/0, ticks=24717/26891, in_queue=51608, util=93.13% 00:29:59.183 nvme0n4: ios=2105/2368, merge=0/0, ticks=14059/17982, in_queue=32041, util=98.11% 00:29:59.183 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:29:59.183 [global] 00:29:59.183 thread=1 00:29:59.183 invalidate=1 00:29:59.183 rw=randwrite 00:29:59.183 time_based=1 00:29:59.183 runtime=1 00:29:59.183 ioengine=libaio 00:29:59.183 direct=1 00:29:59.183 bs=4096 00:29:59.183 iodepth=128 00:29:59.183 norandommap=0 00:29:59.183 numjobs=1 00:29:59.183 00:29:59.183 verify_dump=1 00:29:59.183 verify_backlog=512 00:29:59.183 verify_state_save=0 00:29:59.183 do_verify=1 00:29:59.183 verify=crc32c-intel 00:29:59.183 [job0] 00:29:59.183 filename=/dev/nvme0n1 00:29:59.183 [job1] 00:29:59.183 filename=/dev/nvme0n2 00:29:59.183 [job2] 00:29:59.183 filename=/dev/nvme0n3 00:29:59.183 [job3] 00:29:59.183 filename=/dev/nvme0n4 00:29:59.183 Could not set queue depth (nvme0n1) 00:29:59.183 Could not set queue depth (nvme0n2) 00:29:59.183 Could not set queue depth (nvme0n3) 00:29:59.183 Could not set queue depth (nvme0n4) 00:29:59.183 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:59.183 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:59.183 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:59.183 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:29:59.183 fio-3.35 00:29:59.183 Starting 4 threads 00:30:00.558 00:30:00.558 job0: (groupid=0, jobs=1): err= 0: pid=2551587: Tue Dec 10 04:17:54 2024 00:30:00.558 read: IOPS=2850, BW=11.1MiB/s (11.7MB/s)(11.2MiB/1004msec) 00:30:00.558 slat (usec): min=2, max=43828, avg=194.77, stdev=1561.77 00:30:00.558 clat (usec): min=2134, max=62568, avg=23086.27, stdev=14565.10 00:30:00.558 lat (usec): min=6682, max=62573, avg=23281.03, stdev=14621.50 00:30:00.558 clat percentiles (usec): 00:30:00.558 | 1.00th=[ 6915], 5.00th=[10552], 10.00th=[11863], 20.00th=[12649], 00:30:00.558 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13960], 60.00th=[16909], 00:30:00.559 | 70.00th=[30540], 80.00th=[35914], 90.00th=[51119], 95.00th=[54264], 00:30:00.559 | 99.00th=[62653], 99.50th=[62653], 99.90th=[62653], 99.95th=[62653], 00:30:00.559 | 99.99th=[62653] 00:30:00.559 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:30:00.559 slat (usec): min=3, max=12155, avg=139.30, stdev=759.30 00:30:00.559 clat (usec): min=8123, max=54462, avg=19925.35, stdev=11650.76 00:30:00.559 lat (usec): min=8136, max=54466, avg=20064.66, stdev=11701.35 00:30:00.559 clat percentiles (usec): 00:30:00.559 | 1.00th=[ 8717], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[11600], 00:30:00.559 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13435], 60.00th=[15664], 00:30:00.559 | 70.00th=[22414], 80.00th=[31589], 90.00th=[38011], 95.00th=[44303], 00:30:00.559 | 99.00th=[51119], 99.50th=[54264], 99.90th=[54264], 99.95th=[54264], 00:30:00.559 | 99.99th=[54264] 00:30:00.559 bw ( KiB/s): min=11880, max=12696, per=19.34%, avg=12288.00, stdev=577.00, samples=2 00:30:00.559 iops : min= 2970, max= 3174, avg=3072.00, stdev=144.25, samples=2 00:30:00.559 lat (msec) : 4=0.02%, 10=5.54%, 20=60.77%, 50=27.59%, 100=6.08% 00:30:00.559 cpu : usr=1.89%, sys=2.89%, ctx=270, majf=0, minf=1 00:30:00.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:30:00.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:00.559 issued rwts: total=2862,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:00.559 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:00.559 job1: (groupid=0, jobs=1): err= 0: pid=2551588: Tue Dec 10 04:17:54 2024 00:30:00.559 read: IOPS=5004, BW=19.5MiB/s (20.5MB/s)(19.6MiB/1004msec) 00:30:00.559 slat (nsec): min=1938, max=10006k, avg=96739.29, stdev=542897.43 00:30:00.559 clat (usec): min=555, max=25401, avg=12550.41, stdev=2839.55 00:30:00.559 lat (usec): min=3279, max=25452, avg=12647.15, stdev=2853.30 00:30:00.559 clat percentiles (usec): 00:30:00.559 | 1.00th=[ 6259], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10552], 00:30:00.559 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[12518], 00:30:00.559 | 70.00th=[12911], 80.00th=[13960], 90.00th=[16450], 95.00th=[18744], 00:30:00.559 | 99.00th=[22152], 99.50th=[22414], 99.90th=[25035], 99.95th=[25035], 00:30:00.559 | 99.99th=[25297] 00:30:00.559 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:30:00.559 slat (usec): min=2, max=8771, avg=95.71, stdev=565.28 00:30:00.559 clat (usec): min=6285, max=26107, avg=12488.21, stdev=2509.72 00:30:00.559 lat (usec): min=6317, max=26113, avg=12583.92, stdev=2519.94 00:30:00.559 clat percentiles (usec): 00:30:00.559 | 1.00th=[ 7767], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[10814], 00:30:00.559 | 30.00th=[11207], 40.00th=[11469], 
50.00th=[12125], 60.00th=[12518], 00:30:00.559 | 70.00th=[12911], 80.00th=[14222], 90.00th=[15401], 95.00th=[16909], 00:30:00.559 | 99.00th=[22676], 99.50th=[22938], 99.90th=[24249], 99.95th=[24249], 00:30:00.559 | 99.99th=[26084] 00:30:00.559 bw ( KiB/s): min=19776, max=21184, per=32.23%, avg=20480.00, stdev=995.61, samples=2 00:30:00.559 iops : min= 4944, max= 5296, avg=5120.00, stdev=248.90, samples=2 00:30:00.559 lat (usec) : 750=0.01% 00:30:00.559 lat (msec) : 4=0.32%, 10=8.46%, 20=89.55%, 50=1.67% 00:30:00.559 cpu : usr=2.59%, sys=5.48%, ctx=401, majf=0, minf=1 00:30:00.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:30:00.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:00.559 issued rwts: total=5025,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:00.559 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:00.559 job2: (groupid=0, jobs=1): err= 0: pid=2551589: Tue Dec 10 04:17:54 2024 00:30:00.559 read: IOPS=3885, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1005msec) 00:30:00.559 slat (usec): min=2, max=46585, avg=130.34, stdev=990.69 00:30:00.559 clat (usec): min=460, max=70399, avg=16187.64, stdev=9447.37 00:30:00.559 lat (usec): min=3756, max=70403, avg=16317.98, stdev=9487.03 00:30:00.559 clat percentiles (usec): 00:30:00.559 | 1.00th=[ 7111], 5.00th=[10159], 10.00th=[11207], 20.00th=[12387], 00:30:00.559 | 30.00th=[13173], 40.00th=[13698], 50.00th=[14091], 60.00th=[14615], 00:30:00.559 | 70.00th=[15270], 80.00th=[16909], 90.00th=[20055], 95.00th=[23725], 00:30:00.559 | 99.00th=[66323], 99.50th=[67634], 99.90th=[70779], 99.95th=[70779], 00:30:00.559 | 99.99th=[70779] 00:30:00.559 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:30:00.559 slat (usec): min=3, max=6921, avg=115.01, stdev=558.33 00:30:00.559 clat (usec): min=5272, max=32232, avg=15399.43, stdev=4010.41 00:30:00.559 lat (usec): min=5911, max=32239, avg=15514.44, stdev=4019.99 00:30:00.559 clat percentiles (usec): 00:30:00.559 | 1.00th=[ 9503], 5.00th=[10945], 10.00th=[11600], 20.00th=[11863], 00:30:00.559 | 30.00th=[12518], 40.00th=[13304], 50.00th=[14484], 60.00th=[15270], 00:30:00.559 | 70.00th=[16450], 80.00th=[20055], 90.00th=[21627], 95.00th=[22676], 00:30:00.559 | 99.00th=[25822], 99.50th=[28181], 99.90th=[32113], 99.95th=[32113], 00:30:00.559 | 99.99th=[32113] 00:30:00.559 bw ( KiB/s): min=16384, max=16384, per=25.78%, avg=16384.00, stdev= 0.00, samples=2 00:30:00.559 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:30:00.559 lat (usec) : 500=0.01% 00:30:00.559 lat (msec) : 4=0.40%, 10=2.71%, 20=81.34%, 50=13.95%, 100=1.59% 00:30:00.559 cpu : usr=1.89%, sys=4.98%, ctx=428, majf=0, minf=1 00:30:00.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:00.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:00.559 issued rwts: total=3905,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:00.559 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:00.559 job3: (groupid=0, jobs=1): err= 0: pid=2551590: Tue Dec 10 04:17:54 2024 00:30:00.559 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:30:00.559 slat (usec): min=2, max=14705, avg=117.38, stdev=714.86 00:30:00.559 clat (usec): min=6847, max=50443, avg=15493.73, stdev=4996.69 00:30:00.559 lat (usec): 
min=6852, max=50450, avg=15611.11, stdev=5035.11 00:30:00.559 clat percentiles (usec): 00:30:00.559 | 1.00th=[10159], 5.00th=[10945], 10.00th=[11731], 20.00th=[12780], 00:30:00.559 | 30.00th=[13304], 40.00th=[13960], 50.00th=[14353], 60.00th=[14746], 00:30:00.559 | 70.00th=[15401], 80.00th=[16581], 90.00th=[19530], 95.00th=[23987], 00:30:00.559 | 99.00th=[40633], 99.50th=[45351], 99.90th=[50594], 99.95th=[50594], 00:30:00.559 | 99.99th=[50594] 00:30:00.559 write: IOPS=3731, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1011msec); 0 zone resets 00:30:00.559 slat (usec): min=3, max=7971, avg=138.72, stdev=584.00 00:30:00.559 clat (usec): min=4651, max=50451, avg=18990.36, stdev=8548.23 00:30:00.559 lat (usec): min=4659, max=50461, avg=19129.09, stdev=8594.63 00:30:00.559 clat percentiles (usec): 00:30:00.559 | 1.00th=[ 7177], 5.00th=[ 9896], 10.00th=[11207], 20.00th=[13566], 00:30:00.559 | 30.00th=[14353], 40.00th=[15139], 50.00th=[15533], 60.00th=[16450], 00:30:00.559 | 70.00th=[20055], 80.00th=[24511], 90.00th=[33817], 95.00th=[39584], 00:30:00.559 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[50594], 00:30:00.559 | 99.99th=[50594] 00:30:00.559 bw ( KiB/s): min=12784, max=16384, per=22.95%, avg=14584.00, stdev=2545.58, samples=2 00:30:00.559 iops : min= 3196, max= 4096, avg=3646.00, stdev=636.40, samples=2 00:30:00.559 lat (msec) : 10=3.18%, 20=76.28%, 50=20.44%, 100=0.10% 00:30:00.559 cpu : usr=3.07%, sys=4.55%, ctx=461, majf=0, minf=1 00:30:00.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:30:00.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:00.559 issued rwts: total=3584,3773,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:00.559 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:00.559 00:30:00.559 Run status group 0 (all jobs): 00:30:00.559 READ: bw=59.4MiB/s (62.3MB/s), 11.1MiB/s-19.5MiB/s (11.7MB/s-20.5MB/s), io=60.1MiB (63.0MB), run=1004-1011msec 00:30:00.559 WRITE: bw=62.1MiB/s (65.1MB/s), 12.0MiB/s-19.9MiB/s (12.5MB/s-20.9MB/s), io=62.7MiB (65.8MB), run=1004-1011msec 00:30:00.559 00:30:00.559 Disk stats (read/write): 00:30:00.559 nvme0n1: ios=2600/2587, merge=0/0, ticks=15929/14505, in_queue=30434, util=96.19% 00:30:00.559 nvme0n2: ios=4111/4320, merge=0/0, ticks=16439/17197, in_queue=33636, util=90.34% 00:30:00.559 nvme0n3: ios=3121/3439, merge=0/0, ticks=16506/16712, in_queue=33218, util=95.50% 00:30:00.559 nvme0n4: ios=3115/3246, merge=0/0, ticks=18873/18633, in_queue=37506, util=96.20% 00:30:00.559 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:30:00.559 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2551747 00:30:00.559 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:30:00.559 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:30:00.559 [global] 00:30:00.559 thread=1 00:30:00.559 invalidate=1 00:30:00.559 rw=read 00:30:00.559 time_based=1 00:30:00.559 runtime=10 00:30:00.559 ioengine=libaio 00:30:00.559 direct=1 00:30:00.559 bs=4096 00:30:00.559 iodepth=1 00:30:00.559 norandommap=1 00:30:00.559 numjobs=1 00:30:00.559 00:30:00.559 [job0] 00:30:00.559 filename=/dev/nvme0n1 00:30:00.559 [job1] 00:30:00.559 
filename=/dev/nvme0n2 00:30:00.559 [job2] 00:30:00.559 filename=/dev/nvme0n3 00:30:00.559 [job3] 00:30:00.559 filename=/dev/nvme0n4 00:30:00.559 Could not set queue depth (nvme0n1) 00:30:00.559 Could not set queue depth (nvme0n2) 00:30:00.559 Could not set queue depth (nvme0n3) 00:30:00.559 Could not set queue depth (nvme0n4) 00:30:00.559 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:00.559 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:00.559 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:00.559 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:00.559 fio-3.35 00:30:00.559 Starting 4 threads 00:30:03.853 04:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:30:03.853 04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:30:03.853 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=4595712, buflen=4096 00:30:03.853 fio: pid=2551942, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:04.110 04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:04.111 04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:30:04.111 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4661248, buflen=4096 00:30:04.111 fio: pid=2551941, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:04.369 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=41287680, buflen=4096 00:30:04.369 fio: pid=2551939, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:04.369 04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:04.369 04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:30:04.628 04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:04.628 04:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:30:04.628 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=35303424, buflen=4096 00:30:04.628 fio: pid=2551940, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:04.628 00:30:04.628 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2551939: Tue Dec 10 04:17:58 2024 00:30:04.628 read: IOPS=2834, BW=11.1MiB/s (11.6MB/s)(39.4MiB/3556msec) 00:30:04.628 slat (usec): min=4, max=13887, avg=14.18, stdev=248.83 00:30:04.628 clat (usec): min=196, max=41286, 
avg=333.84, stdev=1853.00 00:30:04.628 lat (usec): min=201, max=41302, avg=348.02, stdev=1870.24 00:30:04.628 clat percentiles (usec): 00:30:04.628 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 227], 00:30:04.628 | 30.00th=[ 235], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:30:04.628 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 281], 00:30:04.628 | 99.00th=[ 388], 99.50th=[ 478], 99.90th=[41157], 99.95th=[41157], 00:30:04.628 | 99.99th=[41157] 00:30:04.628 bw ( KiB/s): min= 1536, max=16352, per=49.97%, avg=10868.00, stdev=6883.03, samples=6 00:30:04.628 iops : min= 384, max= 4088, avg=2717.00, stdev=1720.76, samples=6 00:30:04.628 lat (usec) : 250=50.31%, 500=49.29%, 750=0.15% 00:30:04.628 lat (msec) : 2=0.02%, 10=0.01%, 50=0.21% 00:30:04.628 cpu : usr=1.35%, sys=3.85%, ctx=10088, majf=0, minf=1 00:30:04.628 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:04.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.628 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.628 issued rwts: total=10081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.628 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:04.628 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2551940: Tue Dec 10 04:17:58 2024 00:30:04.628 read: IOPS=2236, BW=8943KiB/s (9158kB/s)(33.7MiB/3855msec) 00:30:04.628 slat (usec): min=4, max=24178, avg=20.60, stdev=402.25 00:30:04.628 clat (usec): min=200, max=41184, avg=421.43, stdev=2322.42 00:30:04.628 lat (usec): min=205, max=56594, avg=440.91, stdev=2386.07 00:30:04.628 clat percentiles (usec): 00:30:04.628 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:30:04.628 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 273], 00:30:04.628 | 70.00th=[ 289], 80.00th=[ 326], 90.00th=[ 392], 95.00th=[ 457], 00:30:04.628 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[41157], 99.95th=[41157], 00:30:04.628 | 99.99th=[41157] 00:30:04.628 bw ( KiB/s): min= 104, max=15056, per=44.39%, avg=9654.71, stdev=5570.61, samples=7 00:30:04.628 iops : min= 26, max= 3764, avg=2413.57, stdev=1392.72, samples=7 00:30:04.628 lat (usec) : 250=40.63%, 500=56.59%, 750=2.39%, 1000=0.01% 00:30:04.628 lat (msec) : 4=0.01%, 10=0.02%, 20=0.01%, 50=0.32% 00:30:04.628 cpu : usr=0.96%, sys=2.93%, ctx=8632, majf=0, minf=1 00:30:04.628 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:04.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.628 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.628 issued rwts: total=8620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.628 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:04.628 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2551941: Tue Dec 10 04:17:58 2024 00:30:04.628 read: IOPS=348, BW=1393KiB/s (1427kB/s)(4552KiB/3267msec) 00:30:04.628 slat (nsec): min=4902, max=51145, avg=12173.35, stdev=6405.98 00:30:04.628 clat (usec): min=224, max=41125, avg=2834.76, stdev=9775.70 00:30:04.628 lat (usec): min=230, max=41149, avg=2846.94, stdev=9778.11 00:30:04.628 clat percentiles (usec): 00:30:04.628 | 1.00th=[ 243], 5.00th=[ 253], 10.00th=[ 262], 20.00th=[ 273], 00:30:04.628 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 310], 60.00th=[ 326], 00:30:04.628 | 70.00th=[ 355], 80.00th=[ 388], 90.00th=[ 461], 
95.00th=[41157], 00:30:04.628 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:04.628 | 99.99th=[41157] 00:30:04.628 bw ( KiB/s): min= 96, max= 2088, per=2.49%, avg=541.33, stdev=805.05, samples=6 00:30:04.628 iops : min= 24, max= 522, avg=135.33, stdev=201.26, samples=6 00:30:04.628 lat (usec) : 250=3.25%, 500=89.46%, 750=0.88% 00:30:04.628 lat (msec) : 10=0.09%, 20=0.09%, 50=6.15% 00:30:04.628 cpu : usr=0.12%, sys=0.49%, ctx=1140, majf=0, minf=2 00:30:04.628 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:04.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.628 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.628 issued rwts: total=1139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.628 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:04.628 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2551942: Tue Dec 10 04:17:58 2024 00:30:04.628 read: IOPS=377, BW=1508KiB/s (1544kB/s)(4488KiB/2977msec) 00:30:04.628 slat (nsec): min=5386, max=54677, avg=14648.21, stdev=6973.46 00:30:04.628 clat (usec): min=228, max=41378, avg=2615.01, stdev=9431.89 00:30:04.628 lat (usec): min=235, max=41394, avg=2629.64, stdev=9433.79 00:30:04.628 clat percentiles (usec): 00:30:04.628 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 251], 20.00th=[ 262], 00:30:04.628 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 297], 00:30:04.628 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[ 400], 95.00th=[41157], 00:30:04.628 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:04.628 | 99.99th=[41157] 00:30:04.628 bw ( KiB/s): min= 104, max= 5520, per=8.17%, avg=1776.00, stdev=2283.49, samples=5 00:30:04.628 iops : min= 26, max= 1380, avg=444.00, stdev=570.87, samples=5 00:30:04.628 lat (usec) : 250=9.35%, 500=83.70%, 750=1.16% 00:30:04.628 lat (msec) : 50=5.70% 00:30:04.628 cpu : usr=0.27%, sys=0.57%, ctx=1123, majf=0, minf=1 00:30:04.628 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:04.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.628 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.628 issued rwts: total=1123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.628 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:04.628 00:30:04.628 Run status group 0 (all jobs): 00:30:04.628 READ: bw=21.2MiB/s (22.3MB/s), 1393KiB/s-11.1MiB/s (1427kB/s-11.6MB/s), io=81.9MiB (85.8MB), run=2977-3855msec 00:30:04.628 00:30:04.628 Disk stats (read/write): 00:30:04.628 nvme0n1: ios=9376/0, merge=0/0, ticks=3120/0, in_queue=3120, util=95.19% 00:30:04.628 nvme0n2: ios=8662/0, merge=0/0, ticks=3804/0, in_queue=3804, util=97.16% 00:30:04.628 nvme0n3: ios=735/0, merge=0/0, ticks=3427/0, in_queue=3427, util=99.10% 00:30:04.628 nvme0n4: ios=1119/0, merge=0/0, ticks=2795/0, in_queue=2795, util=96.74% 00:30:04.886 04:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:04.886 04:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:30:05.452 04:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:30:05.452 04:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:30:05.711 04:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:05.711 04:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:30:05.968 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:05.968 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:30:06.226 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:30:06.226 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2551747 00:30:06.226 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:30:06.226 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:06.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:06.484 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:06.484 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:30:06.484 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:06.484 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:06.484 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:06.484 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:06.484 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:30:06.484 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:30:06.484 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:30:06.484 nvmf hotplug test: fio failed as expected 00:30:06.484 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:30:06.742 04:18:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:06.742 rmmod nvme_tcp 00:30:06.742 rmmod nvme_fabrics 00:30:06.742 rmmod nvme_keyring 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2549837 ']' 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2549837 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2549837 ']' 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2549837 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:30:06.742 04:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:06.742 04:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2549837 00:30:06.742 04:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:06.742 04:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:06.743 04:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2549837' 00:30:06.743 killing process with pid 2549837 00:30:06.743 04:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2549837 00:30:06.743 04:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2549837 00:30:07.000 04:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:07.000 04:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:07.000 04:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:07.000 04:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # 
iptr 00:30:07.000 04:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:30:07.000 04:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:07.000 04:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:30:07.000 04:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:07.000 04:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:07.000 04:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.000 04:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.000 04:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.950 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:08.950 00:30:08.950 real 0m24.008s 00:30:08.950 user 1m8.017s 00:30:08.950 sys 0m9.948s 00:30:08.950 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:08.950 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:08.950 ************************************ 00:30:08.950 END TEST nvmf_fio_target 00:30:08.950 ************************************ 00:30:08.950 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:08.950 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:08.950 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:08.950 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:09.218 ************************************ 00:30:09.218 START TEST nvmf_bdevio 00:30:09.218 ************************************ 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:09.218 * Looking for test storage... 
00:30:09.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:09.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.218 --rc genhtml_branch_coverage=1 00:30:09.218 --rc genhtml_function_coverage=1 00:30:09.218 --rc genhtml_legend=1 00:30:09.218 --rc geninfo_all_blocks=1 00:30:09.218 --rc geninfo_unexecuted_blocks=1 00:30:09.218 00:30:09.218 ' 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:09.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.218 --rc genhtml_branch_coverage=1 00:30:09.218 --rc genhtml_function_coverage=1 00:30:09.218 --rc genhtml_legend=1 00:30:09.218 --rc geninfo_all_blocks=1 00:30:09.218 --rc geninfo_unexecuted_blocks=1 00:30:09.218 00:30:09.218 ' 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:09.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.218 --rc genhtml_branch_coverage=1 00:30:09.218 --rc genhtml_function_coverage=1 00:30:09.218 --rc genhtml_legend=1 00:30:09.218 --rc geninfo_all_blocks=1 00:30:09.218 --rc geninfo_unexecuted_blocks=1 00:30:09.218 00:30:09.218 ' 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:09.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.218 --rc genhtml_branch_coverage=1 00:30:09.218 --rc genhtml_function_coverage=1 00:30:09.218 --rc genhtml_legend=1 00:30:09.218 --rc geninfo_all_blocks=1 00:30:09.218 --rc geninfo_unexecuted_blocks=1 00:30:09.218 00:30:09.218 ' 00:30:09.218 04:18:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.218 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.219 04:18:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:30:09.219 04:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.752 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:11.753 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:11.753 04:18:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:11.753 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:11.753 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:11.753 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:11.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:11.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:30:11.753 00:30:11.753 --- 10.0.0.2 ping statistics --- 00:30:11.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.753 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:11.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:11.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:30:11.753 00:30:11.753 --- 10.0.0.1 ping statistics --- 00:30:11.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.753 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:11.753 04:18:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2554694 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2554694 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2554694 ']' 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.753 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.754 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.754 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:11.754 [2024-12-10 04:18:05.905858] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:11.754 [2024-12-10 04:18:05.906993] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:30:11.754 [2024-12-10 04:18:05.907053] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.754 [2024-12-10 04:18:05.978860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:11.754 [2024-12-10 04:18:06.035854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.754 [2024-12-10 04:18:06.035914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:11.754 [2024-12-10 04:18:06.035943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.754 [2024-12-10 04:18:06.035954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.754 [2024-12-10 04:18:06.035964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:11.754 [2024-12-10 04:18:06.037643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:11.754 [2024-12-10 04:18:06.037694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:11.754 [2024-12-10 04:18:06.037741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:11.754 [2024-12-10 04:18:06.037745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:11.754 [2024-12-10 04:18:06.130764] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:11.754 [2024-12-10 04:18:06.131015] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:11.754 [2024-12-10 04:18:06.131302] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:11.754 [2024-12-10 04:18:06.131978] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:11.754 [2024-12-10 04:18:06.132195] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:12.012 [2024-12-10 04:18:06.186497] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:12.012 Malloc0 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.012 04:18:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:12.012 [2024-12-10 04:18:06.254705] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:12.012 { 00:30:12.012 "params": { 00:30:12.012 "name": "Nvme$subsystem", 00:30:12.012 "trtype": "$TEST_TRANSPORT", 00:30:12.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.012 "adrfam": "ipv4", 00:30:12.012 "trsvcid": "$NVMF_PORT", 00:30:12.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.012 "hdgst": ${hdgst:-false}, 00:30:12.012 "ddgst": ${ddgst:-false} 00:30:12.012 }, 00:30:12.012 "method": "bdev_nvme_attach_controller" 00:30:12.012 } 00:30:12.012 EOF 00:30:12.012 )") 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:30:12.012 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:12.012 "params": { 00:30:12.012 "name": "Nvme1", 00:30:12.012 "trtype": "tcp", 00:30:12.012 "traddr": "10.0.0.2", 00:30:12.012 "adrfam": "ipv4", 00:30:12.012 "trsvcid": "4420", 00:30:12.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:12.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:12.012 "hdgst": false, 00:30:12.012 "ddgst": false 00:30:12.012 }, 00:30:12.012 "method": "bdev_nvme_attach_controller" 00:30:12.012 }' 00:30:12.012 [2024-12-10 04:18:06.306947] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:30:12.012 [2024-12-10 04:18:06.307019] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2554834 ] 00:30:12.012 [2024-12-10 04:18:06.377384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:12.270 [2024-12-10 04:18:06.440613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.270 [2024-12-10 04:18:06.440665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:12.270 [2024-12-10 04:18:06.440669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.528 I/O targets: 00:30:12.528 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:30:12.528 00:30:12.528 00:30:12.528 CUnit - A unit testing framework for C - Version 2.1-3 00:30:12.528 http://cunit.sourceforge.net/ 00:30:12.528 00:30:12.528 00:30:12.528 Suite: bdevio tests on: Nvme1n1 00:30:12.528 Test: blockdev write read block ...passed 00:30:12.528 Test: blockdev write zeroes read block ...passed 00:30:12.528 Test: blockdev write zeroes read no split ...passed 00:30:12.528 Test: blockdev write zeroes read split ...passed 00:30:12.528 Test: blockdev write zeroes read split partial ...passed 00:30:12.528 Test: blockdev reset ...[2024-12-10 04:18:06.842539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:12.528 [2024-12-10 04:18:06.842648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171e8c0 (9): Bad file descriptor 00:30:12.528 [2024-12-10 04:18:06.894653] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:30:12.528 passed 00:30:12.528 Test: blockdev write read 8 blocks ...passed 00:30:12.528 Test: blockdev write read size > 128k ...passed 00:30:12.528 Test: blockdev write read invalid size ...passed 00:30:12.786 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:12.786 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:12.786 Test: blockdev write read max offset ...passed 00:30:12.786 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:12.786 Test: blockdev writev readv 8 blocks ...passed 00:30:12.786 Test: blockdev writev readv 30 x 1block ...passed 00:30:12.786 Test: blockdev writev readv block ...passed 00:30:12.786 Test: blockdev writev readv size > 128k ...passed 00:30:12.786 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:12.786 Test: blockdev comparev and writev ...[2024-12-10 04:18:07.109883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:12.786 [2024-12-10 04:18:07.109920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:12.786 [2024-12-10 04:18:07.109946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:12.786 [2024-12-10 04:18:07.109964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.786 [2024-12-10 04:18:07.110336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:12.786 [2024-12-10 04:18:07.110362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:12.786 [2024-12-10 04:18:07.110384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:12.786 [2024-12-10 04:18:07.110401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:12.786 [2024-12-10 04:18:07.110783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:12.786 [2024-12-10 04:18:07.110808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:12.786 [2024-12-10 04:18:07.110829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:12.786 [2024-12-10 04:18:07.110852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:12.786 [2024-12-10 04:18:07.111202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:12.786 [2024-12-10 04:18:07.111239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:12.786 [2024-12-10 04:18:07.111261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:12.786 [2024-12-10 04:18:07.111277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:12.786 passed 00:30:13.044 Test: blockdev nvme passthru rw ...passed 00:30:13.044 Test: blockdev nvme passthru vendor specific ...[2024-12-10 04:18:07.192848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:13.044 [2024-12-10 04:18:07.192877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:13.044 [2024-12-10 04:18:07.193035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:13.044 [2024-12-10 04:18:07.193059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:13.045 [2024-12-10 04:18:07.193204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:13.045 [2024-12-10 04:18:07.193227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:13.045 [2024-12-10 04:18:07.193369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:13.045 [2024-12-10 04:18:07.193392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:13.045 passed 00:30:13.045 Test: blockdev nvme admin passthru ...passed 00:30:13.045 Test: blockdev copy ...passed 00:30:13.045 00:30:13.045 Run Summary: Type Total Ran Passed Failed Inactive 00:30:13.045 suites 1 1 n/a 0 0 00:30:13.045 tests 23 23 23 0 0 00:30:13.045 asserts 152 152 152 0 n/a 00:30:13.045 00:30:13.045 Elapsed time = 1.024 seconds 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:13.303 rmmod nvme_tcp 00:30:13.303 rmmod nvme_fabrics 00:30:13.303 rmmod nvme_keyring 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2554694 ']' 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2554694 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2554694 ']' 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2554694 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2554694 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2554694' 00:30:13.303 killing process with pid 2554694 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2554694 00:30:13.303 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2554694 00:30:13.562 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:13.562 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:13.562 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:13.562 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:30:13.562 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:30:13.562 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:13.562 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:30:13.562 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:13.562 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:13.562 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.562 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.562 04:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.461 04:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:15.461 00:30:15.461 real 0m6.486s 00:30:15.461 user 
0m8.454s 00:30:15.461 sys 0m2.595s 00:30:15.461 04:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:15.461 04:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:15.461 ************************************ 00:30:15.461 END TEST nvmf_bdevio 00:30:15.461 ************************************ 00:30:15.720 04:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:30:15.720 00:30:15.720 real 3m55.187s 00:30:15.720 user 8m56.401s 00:30:15.720 sys 1m23.811s 00:30:15.720 04:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:15.720 04:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:15.720 ************************************ 00:30:15.720 END TEST nvmf_target_core_interrupt_mode 00:30:15.720 ************************************ 00:30:15.720 04:18:09 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:15.720 04:18:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:15.720 04:18:09 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:15.720 04:18:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:15.720 ************************************ 00:30:15.720 START TEST nvmf_interrupt 00:30:15.720 ************************************ 00:30:15.720 04:18:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:15.720 * Looking for test storage... 
00:30:15.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:15.720 04:18:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:15.720 04:18:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:30:15.720 04:18:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:15.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.720 --rc genhtml_branch_coverage=1 00:30:15.720 --rc genhtml_function_coverage=1 00:30:15.720 --rc genhtml_legend=1 00:30:15.720 --rc geninfo_all_blocks=1 00:30:15.720 --rc geninfo_unexecuted_blocks=1 00:30:15.720 00:30:15.720 ' 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:15.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.720 --rc genhtml_branch_coverage=1 00:30:15.720 --rc genhtml_function_coverage=1 00:30:15.720 --rc genhtml_legend=1 00:30:15.720 --rc geninfo_all_blocks=1 00:30:15.720 --rc geninfo_unexecuted_blocks=1 00:30:15.720 00:30:15.720 ' 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:15.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.720 --rc genhtml_branch_coverage=1 00:30:15.720 --rc genhtml_function_coverage=1 00:30:15.720 --rc genhtml_legend=1 00:30:15.720 --rc geninfo_all_blocks=1 00:30:15.720 --rc geninfo_unexecuted_blocks=1 00:30:15.720 00:30:15.720 ' 00:30:15.720 04:18:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:15.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.720 --rc genhtml_branch_coverage=1 00:30:15.720 --rc genhtml_function_coverage=1 00:30:15.720 --rc genhtml_legend=1 00:30:15.720 --rc geninfo_all_blocks=1 00:30:15.721 --rc geninfo_unexecuted_blocks=1 00:30:15.721 00:30:15.721 ' 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:30:15.721 04:18:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:18.255 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.255 04:18:12 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:18.255 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:18.255 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:18.255 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:18.255 04:18:12 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.255 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:18.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:30:18.256 00:30:18.256 --- 10.0.0.2 ping statistics --- 00:30:18.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.256 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:18.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:30:18.256 00:30:18.256 --- 10.0.0.1 ping statistics --- 00:30:18.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.256 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2557427 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2557427 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2557427 ']' 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:18.256 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:18.256 [2024-12-10 04:18:12.463534] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:18.256 [2024-12-10 04:18:12.464634] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:30:18.256 [2024-12-10 04:18:12.464691] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.256 [2024-12-10 04:18:12.537763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:18.256 [2024-12-10 04:18:12.594623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:18.256 [2024-12-10 04:18:12.594679] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.256 [2024-12-10 04:18:12.594709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.256 [2024-12-10 04:18:12.594720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.256 [2024-12-10 04:18:12.594731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:18.256 [2024-12-10 04:18:12.596112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.256 [2024-12-10 04:18:12.596118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.514 [2024-12-10 04:18:12.684437] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:18.514 [2024-12-10 04:18:12.684458] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:18.514 [2024-12-10 04:18:12.684730] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:18.514 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:18.514 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:30:18.514 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:18.514 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:18.514 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:18.514 04:18:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:18.514 04:18:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:30:18.514 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:30:18.514 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:30:18.515 5000+0 records in 00:30:18.515 5000+0 records out 00:30:18.515 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0136491 s, 750 MB/s 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:18.515 AIO0 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:18.515 [2024-12-10 04:18:12.784801] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.515 04:18:12 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:18.515 [2024-12-10 04:18:12.812995] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2557427 0 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2557427 0 idle 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2557427 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2557427 -w 256 00:30:18.515 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:18.773 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2557427 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.27 reactor_0' 00:30:18.773 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2557427 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.27 reactor_0 00:30:18.773 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:18.773 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:18.773 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:18.773 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:30:18.773 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:18.773 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:18.773 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:18.773 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:18.774 04:18:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:18.774 04:18:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2557427 1 00:30:18.774 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2557427 1 idle 00:30:18.774 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2557427 00:30:18.774 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:18.774 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:18.774 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:18.774 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:18.774 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:18.774 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:18.774 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:18.774 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:18.774 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:18.774 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2557427 -w 256 00:30:18.774 04:18:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:18.774 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2557434 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:30:18.774 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2557434 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:30:18.774 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:18.774 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:18.774 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:18.774 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:18.774 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:18.774 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:18.774 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:18.774 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:18.774 04:18:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:18.774 04:18:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2557595 00:30:18.774 04:18:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:18.774 04:18:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
00:30:18.774 04:18:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:18.774 04:18:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2557427 0 00:30:18.774 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2557427 0 busy 00:30:18.774 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2557427 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2557427 -w 256 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2557427 root 20 0 128.2g 48384 34944 R 33.3 0.1 0:00.32 reactor_0' 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2557427 root 20 0 128.2g 48384 34944 R 33.3 0.1 0:00.32 reactor_0 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=33.3 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=33 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2557427 1 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2557427 1 busy 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2557427 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2557427 -w 256 00:30:19.033 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:19.292 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2557434 root 20 0 128.2g 48384 34944 R 86.7 0.1 0:00.16 reactor_1' 00:30:19.292 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2557434 root 20 0 128.2g 48384 34944 R 86.7 0.1 0:00.16 reactor_1 00:30:19.292 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:19.292 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:19.292 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=86.7 00:30:19.292 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=86 00:30:19.292 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:19.292 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:19.292 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:19.292 04:18:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:19.292 04:18:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2557595 00:30:29.272 Initializing NVMe Controllers 00:30:29.272 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:29.272 Controller IO queue size 256, less than required. 00:30:29.272 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:29.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:29.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:29.272 Initialization complete. Launching workers. 
00:30:29.272 ======================================================== 00:30:29.272 Latency(us) 00:30:29.272 Device Information : IOPS MiB/s Average min max 00:30:29.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13795.97 53.89 18569.34 4388.96 22932.04 00:30:29.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 12629.99 49.34 20285.89 4527.22 61093.31 00:30:29.272 ======================================================== 00:30:29.272 Total : 26425.96 103.23 19389.75 4388.96 61093.31 00:30:29.272 00:30:29.272 04:18:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:29.272 04:18:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2557427 0 00:30:29.272 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2557427 0 idle 00:30:29.272 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2557427 00:30:29.272 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:29.272 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:29.272 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:29.272 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:29.272 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:29.272 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:29.272 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:29.272 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:29.272 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:29.272 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2557427 -w 256 00:30:29.272 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:29.272 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2557427 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:19.49 reactor_0' 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2557427 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:19.49 reactor_0 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2557427 1 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2557427 1 idle 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2557427 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2557427 -w 256 00:30:29.273 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:29.531 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2557434 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.24 reactor_1' 00:30:29.531 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2557434 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.24 reactor_1 00:30:29.531 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:29.531 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:29.531 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:29.531 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:29.531 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:29.531 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:29.531 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:29.531 04:18:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:29.531 04:18:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:29.789 04:18:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:30:29.789 04:18:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:30:29.789 04:18:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:29.789 04:18:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:30:29.789 04:18:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2557427 0 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2557427 0 idle 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2557427 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2557427 -w 256 00:30:31.694 04:18:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:31.952 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2557427 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:19.57 reactor_0' 00:30:31.952 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2557427 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:19.57 reactor_0 00:30:31.952 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:31.952 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2557427 1 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2557427 1 idle 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2557427 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
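The kernel-initiator attach that precedes these idle checks follows a simple pattern: issue nvme connect against the target's TCP listener, then poll lsblk until a block device carrying the subsystem's serial number appears. A sketch of that pattern, reusing the address, NQN and serial that appear in this trace and mirroring the waitforserial helper's 15 x 2 s retry loop:

#!/usr/bin/env bash
# Sketch of the connect-and-wait step traced above; the values are the ones
# from this log, not general defaults.
set -e

sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Give the namespace up to ~30 s to appear as a block device whose SERIAL
# matches the serial number configured on the SPDK subsystem.
for _ in {1..15}; do
    if lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; then
        echo "namespace attached"
        exit 0
    fi
    sleep 2
done
echo "namespace never appeared" >&2
exit 1

The reactor checks that surround the connect are the point of the test: with the target running in interrupt mode, merely having a host connected should leave reactor_0 and reactor_1 at 0.0% CPU, which is what the idle probes below confirm before the disconnect.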
00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2557427 -w 256 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2557434 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:09.27 reactor_1' 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2557434 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:09.27 reactor_1 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:31.953 04:18:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:32.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:32.212 04:18:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:32.212 04:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:30:32.212 04:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:32.212 04:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:32.212 04:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:32.212 04:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:32.212 04:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:30:32.212 04:18:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:30:32.212 04:18:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:30:32.212 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:32.213 rmmod nvme_tcp 00:30:32.213 rmmod nvme_fabrics 00:30:32.213 rmmod nvme_keyring 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
2557427 ']' 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2557427 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2557427 ']' 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2557427 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2557427 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2557427' 00:30:32.213 killing process with pid 2557427 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2557427 00:30:32.213 04:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2557427 00:30:32.472 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:32.472 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:32.472 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:32.472 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:30:32.472 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:30:32.473 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:32.473 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:30:32.473 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:32.473 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:32.473 04:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.473 04:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:32.473 04:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.014 04:18:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:35.014 00:30:35.014 real 0m18.975s 00:30:35.014 user 0m36.362s 00:30:35.014 sys 0m6.940s 00:30:35.014 04:18:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:35.014 04:18:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:35.014 ************************************ 00:30:35.014 END TEST nvmf_interrupt 00:30:35.014 ************************************ 00:30:35.014 00:30:35.014 real 25m2.255s 00:30:35.014 user 58m38.074s 00:30:35.014 sys 6m37.058s 00:30:35.014 04:18:28 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:35.014 04:18:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:35.014 ************************************ 00:30:35.014 END TEST nvmf_tcp 00:30:35.014 ************************************ 00:30:35.014 04:18:28 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:30:35.014 04:18:28 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:35.014 04:18:28 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:35.014 04:18:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:35.014 04:18:28 -- common/autotest_common.sh@10 -- # set +x 00:30:35.014 ************************************ 00:30:35.014 START TEST spdkcli_nvmf_tcp 00:30:35.014 ************************************ 00:30:35.014 04:18:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:35.014 * Looking for test storage... 00:30:35.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:35.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.014 --rc genhtml_branch_coverage=1 00:30:35.014 --rc genhtml_function_coverage=1 00:30:35.014 --rc genhtml_legend=1 00:30:35.014 --rc geninfo_all_blocks=1 00:30:35.014 --rc geninfo_unexecuted_blocks=1 00:30:35.014 00:30:35.014 ' 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:35.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.014 --rc genhtml_branch_coverage=1 00:30:35.014 --rc genhtml_function_coverage=1 00:30:35.014 --rc genhtml_legend=1 00:30:35.014 --rc geninfo_all_blocks=1 00:30:35.014 --rc geninfo_unexecuted_blocks=1 00:30:35.014 00:30:35.014 ' 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:35.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.014 --rc genhtml_branch_coverage=1 00:30:35.014 --rc genhtml_function_coverage=1 00:30:35.014 --rc genhtml_legend=1 00:30:35.014 --rc geninfo_all_blocks=1 00:30:35.014 --rc geninfo_unexecuted_blocks=1 00:30:35.014 00:30:35.014 ' 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:35.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.014 --rc genhtml_branch_coverage=1 00:30:35.014 --rc genhtml_function_coverage=1 00:30:35.014 --rc genhtml_legend=1 00:30:35.014 --rc geninfo_all_blocks=1 00:30:35.014 --rc geninfo_unexecuted_blocks=1 00:30:35.014 00:30:35.014 ' 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:35.014 
04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:35.014 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:35.015 04:18:29 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:35.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2559590 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2559590 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2559590 ']' 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:35.015 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:35.015 [2024-12-10 04:18:29.158684] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
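Before spdkcli_job.py can drive any configuration, the test launches nvmf_tgt on the two-core mask and waits for its JSON-RPC socket; the waitforlisten helper in the trace does this against /var/tmp/spdk.sock. A rough equivalent of that launch/wait sequence, assuming the stock scripts/rpc.py client and its rpc_get_methods call as the readiness probe (the real helper's polling logic is not shown in the trace):

#!/usr/bin/env bash
# Illustrative launch/wait sequence for the target started in the trace above
# (core mask 0x3 and the default RPC socket path are taken from the log).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$SPDK/build/bin/nvmf_tgt" -m 0x3 -p 0 &
tgt_pid=$!

# Poll the RPC server until it answers; spdkcli and rpc.py both talk to this
# UNIX socket once the app is up.
for _ in {1..100}; do
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        echo "nvmf_tgt (pid $tgt_pid) is ready"
        break
    fi
    sleep 0.1
done

Once the socket answers, the batch of '/bdevs/malloc create ...' and '/nvmf/subsystem ...' commands listed below is handed to spdkcli_job.py, which executes each path-style command against this same socket and verifies the expected output, as the "Executing command" lines further down show.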
00:30:35.015 [2024-12-10 04:18:29.158785] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2559590 ] 00:30:35.015 [2024-12-10 04:18:29.223933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:35.015 [2024-12-10 04:18:29.280947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.015 [2024-12-10 04:18:29.280951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.273 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:35.273 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:30:35.273 04:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:35.273 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:35.273 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:35.273 04:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:35.273 04:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:35.273 04:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:35.273 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:35.273 04:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:35.273 04:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:35.273 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:35.273 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:35.273 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:35.273 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:35.273 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:35.274 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:35.274 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:35.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:35.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:35.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:35.274 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:35.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:35.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:35.274 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:35.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:35.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:30:35.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:35.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:35.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:35.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:35.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:35.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:35.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:35.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:35.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:35.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:35.274 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:35.274 ' 00:30:37.816 [2024-12-10 04:18:32.092549] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:39.194 [2024-12-10 04:18:33.360998] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:41.731 [2024-12-10 04:18:35.704217] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:43.636 [2024-12-10 04:18:37.714266] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:45.011 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:45.011 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:45.011 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:45.011 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:45.011 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:45.011 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:45.011 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:45.011 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:45.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:45.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:45.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:45.011 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:45.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:45.011 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:45.011 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:45.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:45.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:45.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:45.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:45.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:45.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:45.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:45.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:45.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:45.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:45.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:45.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:45.012 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:45.012 04:18:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:45.012 04:18:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:45.012 04:18:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:45.012 04:18:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:45.012 04:18:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:45.012 04:18:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:45.271 04:18:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:45.271 04:18:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:45.530 04:18:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:45.530 04:18:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:45.530 04:18:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:45.530 04:18:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:45.530 04:18:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:45.789 
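The check_match step traced just above validates the configuration by dumping the spdkcli tree for /nvmf and comparing it against a checked-in template. A sketch of that sequence, using the paths from the log; the redirection into the .test file is presumed (xtrace does not show redirections), but the subsequent rm -f of that file in the trace implies it:

#!/usr/bin/env bash
# Sketch of the check_match step: dump the live spdkcli view of /nvmf, diff it
# against the expected-output template, then clean up the dump.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
MATCH_DIR=$SPDK/test/spdkcli/match_files

"$SPDK/scripts/spdkcli.py" ll /nvmf > "$MATCH_DIR/spdkcli_nvmf.test"
"$SPDK/test/app/match/match" "$MATCH_DIR/spdkcli_nvmf.test.match"
rm -f "$MATCH_DIR/spdkcli_nvmf.test"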
04:18:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:45.789 04:18:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:45.789 04:18:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:45.789 04:18:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:45.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:45.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:45.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:45.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:45.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:45.789 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:45.789 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:45.789 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:45.789 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:45.789 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:45.789 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:45.789 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:45.789 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:45.789 ' 00:30:51.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:51.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:51.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:51.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:51.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:51.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:51.057 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:51.057 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:51.057 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:51.057 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:51.057 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:51.057 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:51.057 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:51.057 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:51.057 04:18:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:51.057 04:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:51.057 04:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:51.057 
04:18:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2559590 00:30:51.057 04:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2559590 ']' 00:30:51.057 04:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2559590 00:30:51.057 04:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:30:51.057 04:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:51.057 04:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2559590 00:30:51.057 04:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:51.057 04:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:51.057 04:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2559590' 00:30:51.057 killing process with pid 2559590 00:30:51.057 04:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2559590 00:30:51.057 04:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2559590 00:30:51.315 04:18:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:51.315 04:18:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:51.315 04:18:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2559590 ']' 00:30:51.315 04:18:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2559590 00:30:51.315 04:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2559590 ']' 00:30:51.315 04:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2559590 00:30:51.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2559590) - No such process 00:30:51.315 04:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2559590 is not found' 00:30:51.315 Process with pid 2559590 is not found 00:30:51.315 04:18:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:51.315 04:18:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:51.315 04:18:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:51.315 00:30:51.315 real 0m16.642s 00:30:51.315 user 0m35.530s 00:30:51.315 sys 0m0.766s 00:30:51.315 04:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:51.315 04:18:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:51.315 ************************************ 00:30:51.315 END TEST spdkcli_nvmf_tcp 00:30:51.315 ************************************ 00:30:51.315 04:18:45 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:51.315 04:18:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:51.315 04:18:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:51.315 04:18:45 -- common/autotest_common.sh@10 -- # set +x 00:30:51.315 ************************************ 00:30:51.315 START TEST nvmf_identify_passthru 00:30:51.315 ************************************ 00:30:51.315 04:18:45 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:51.574 * Looking for test 
storage... 00:30:51.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:51.574 04:18:45 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:51.574 04:18:45 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:30:51.574 04:18:45 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:51.574 04:18:45 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:51.574 04:18:45 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:51.574 04:18:45 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:51.574 04:18:45 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:51.574 04:18:45 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:30:51.574 04:18:45 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:30:51.574 04:18:45 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:30:51.574 04:18:45 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:30:51.574 04:18:45 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:30:51.574 04:18:45 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:30:51.574 04:18:45 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:30:51.574 04:18:45 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:51.574 04:18:45 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:30:51.574 04:18:45 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:30:51.574 04:18:45 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:51.574 04:18:45 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:51.574 04:18:45 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:30:51.575 04:18:45 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:51.575 04:18:45 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:51.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.575 --rc genhtml_branch_coverage=1 00:30:51.575 --rc genhtml_function_coverage=1 00:30:51.575 --rc genhtml_legend=1 00:30:51.575 --rc geninfo_all_blocks=1 00:30:51.575 --rc geninfo_unexecuted_blocks=1 00:30:51.575 00:30:51.575 ' 00:30:51.575 04:18:45 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:51.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.575 --rc genhtml_branch_coverage=1 00:30:51.575 --rc genhtml_function_coverage=1 00:30:51.575 --rc genhtml_legend=1 00:30:51.575 --rc geninfo_all_blocks=1 00:30:51.575 --rc geninfo_unexecuted_blocks=1 00:30:51.575 00:30:51.575 ' 00:30:51.575 04:18:45 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:51.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.575 --rc genhtml_branch_coverage=1 00:30:51.575 --rc genhtml_function_coverage=1 00:30:51.575 --rc genhtml_legend=1 00:30:51.575 --rc geninfo_all_blocks=1 00:30:51.575 --rc geninfo_unexecuted_blocks=1 00:30:51.575 00:30:51.575 ' 00:30:51.575 04:18:45 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:51.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.575 --rc genhtml_branch_coverage=1 00:30:51.575 --rc genhtml_function_coverage=1 00:30:51.575 --rc genhtml_legend=1 00:30:51.575 --rc geninfo_all_blocks=1 00:30:51.575 --rc geninfo_unexecuted_blocks=1 00:30:51.575 00:30:51.575 ' 00:30:51.575 04:18:45 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:51.575 04:18:45 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.575 04:18:45 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.575 04:18:45 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.575 04:18:45 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:51.575 04:18:45 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:51.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:51.575 04:18:45 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:51.575 04:18:45 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:51.575 04:18:45 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.575 04:18:45 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.575 04:18:45 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.575 04:18:45 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:51.575 04:18:45 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.575 04:18:45 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.575 04:18:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:51.575 04:18:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:51.575 04:18:45 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:30:51.575 04:18:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:54.113 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:54.113 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:30:54.113 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:54.113 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:54.113 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:54.113 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:54.113 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:54.113 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:30:54.113 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:30:54.114 04:18:48 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:54.114 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:54.114 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:54.114 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:54.114 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:54.114 04:18:48 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:54.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:54.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:30:54.114 00:30:54.114 --- 10.0.0.2 ping statistics --- 00:30:54.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.114 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:54.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:54.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:30:54.114 00:30:54.114 --- 10.0.0.1 ping statistics --- 00:30:54.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.114 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:54.114 04:18:48 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:54.114 04:18:48 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:54.114 04:18:48 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:54.114 04:18:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:54.114 04:18:48 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:54.114 04:18:48 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:54.114 04:18:48 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:30:54.114 04:18:48 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:30:54.114 04:18:48 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:30:54.114 04:18:48 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:54.114 04:18:48 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:30:54.114 04:18:48 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:54.114 04:18:48 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:54.114 04:18:48 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:54.114 04:18:48 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:54.114 04:18:48 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:30:54.114 04:18:48 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:30:54.114 04:18:48 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:30:54.114 04:18:48 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:30:54.115 04:18:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:54.115 04:18:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:54.115 04:18:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:58.365 04:18:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ916004901P0FGN 00:30:58.365 04:18:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:58.365 04:18:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:58.365 04:18:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:02.562 04:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:31:02.562 04:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:02.562 04:18:56 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:02.562 04:18:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.562 04:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:02.562 04:18:56 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:02.562 04:18:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.562 04:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2564245 00:31:02.562 04:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:02.562 04:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:02.562 04:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2564245 00:31:02.562 04:18:56 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2564245 ']' 00:31:02.562 04:18:56 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.562 04:18:56 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:02.562 04:18:56 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:02.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:02.562 04:18:56 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:02.562 04:18:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.823 [2024-12-10 04:18:56.990387] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:31:02.823 [2024-12-10 04:18:56.990492] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:02.823 [2024-12-10 04:18:57.066761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:02.823 [2024-12-10 04:18:57.128483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:02.823 [2024-12-10 04:18:57.128570] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:02.823 [2024-12-10 04:18:57.128588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:02.823 [2024-12-10 04:18:57.128599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:02.823 [2024-12-10 04:18:57.128609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:02.823 [2024-12-10 04:18:57.130200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.823 [2024-12-10 04:18:57.130262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:02.823 [2024-12-10 04:18:57.130329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:02.823 [2024-12-10 04:18:57.130332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.081 04:18:57 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:03.081 04:18:57 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:31:03.081 04:18:57 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:03.081 04:18:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.081 04:18:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:03.081 INFO: Log level set to 20 00:31:03.081 INFO: Requests: 00:31:03.081 { 00:31:03.081 "jsonrpc": "2.0", 00:31:03.081 "method": "nvmf_set_config", 00:31:03.081 "id": 1, 00:31:03.081 "params": { 00:31:03.081 "admin_cmd_passthru": { 00:31:03.081 "identify_ctrlr": true 00:31:03.081 } 00:31:03.081 } 00:31:03.081 } 00:31:03.081 00:31:03.081 INFO: response: 00:31:03.081 { 00:31:03.081 "jsonrpc": "2.0", 00:31:03.081 "id": 1, 00:31:03.081 "result": true 00:31:03.081 } 00:31:03.081 00:31:03.081 04:18:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.081 04:18:57 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:03.081 04:18:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.081 04:18:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:03.081 INFO: Setting log level to 20 00:31:03.081 INFO: Setting log level to 20 00:31:03.081 INFO: Log level set to 20 00:31:03.081 INFO: Log level set to 20 00:31:03.081 INFO: Requests: 00:31:03.081 { 00:31:03.081 "jsonrpc": "2.0", 00:31:03.081 "method": "framework_start_init", 00:31:03.081 "id": 1 00:31:03.081 } 00:31:03.081 00:31:03.081 INFO: Requests: 00:31:03.081 { 00:31:03.081 "jsonrpc": "2.0", 00:31:03.081 "method": "framework_start_init", 00:31:03.081 "id": 1 00:31:03.081 } 00:31:03.081 00:31:03.081 [2024-12-10 04:18:57.325286] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:03.081 INFO: response: 00:31:03.081 { 00:31:03.081 "jsonrpc": "2.0", 00:31:03.081 "id": 1, 00:31:03.081 "result": true 00:31:03.081 } 00:31:03.081 00:31:03.081 INFO: response: 00:31:03.081 { 00:31:03.081 "jsonrpc": "2.0", 00:31:03.081 "id": 1, 00:31:03.081 "result": true 00:31:03.081 } 00:31:03.081 00:31:03.081 04:18:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.081 04:18:57 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:03.081 04:18:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.081 04:18:57 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:31:03.081 INFO: Setting log level to 40 00:31:03.081 INFO: Setting log level to 40 00:31:03.081 INFO: Setting log level to 40 00:31:03.081 [2024-12-10 04:18:57.335275] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:03.081 04:18:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.081 04:18:57 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:03.081 04:18:57 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:03.081 04:18:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:03.081 04:18:57 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:31:03.081 04:18:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.081 04:18:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:06.368 Nvme0n1 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.368 04:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.368 04:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.368 04:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:06.368 [2024-12-10 04:19:00.236826] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.368 04:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:06.368 [ 00:31:06.368 { 00:31:06.368 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:06.368 "subtype": "Discovery", 00:31:06.368 "listen_addresses": [], 00:31:06.368 "allow_any_host": true, 00:31:06.368 "hosts": [] 00:31:06.368 }, 00:31:06.368 { 00:31:06.368 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:06.368 "subtype": "NVMe", 00:31:06.368 "listen_addresses": [ 00:31:06.368 { 00:31:06.368 "trtype": "TCP", 00:31:06.368 "adrfam": "IPv4", 00:31:06.368 "traddr": "10.0.0.2", 00:31:06.368 "trsvcid": "4420" 00:31:06.368 } 00:31:06.368 ], 00:31:06.368 "allow_any_host": true, 00:31:06.368 "hosts": [], 00:31:06.368 "serial_number": 
"SPDK00000000000001", 00:31:06.368 "model_number": "SPDK bdev Controller", 00:31:06.368 "max_namespaces": 1, 00:31:06.368 "min_cntlid": 1, 00:31:06.368 "max_cntlid": 65519, 00:31:06.368 "namespaces": [ 00:31:06.368 { 00:31:06.368 "nsid": 1, 00:31:06.368 "bdev_name": "Nvme0n1", 00:31:06.368 "name": "Nvme0n1", 00:31:06.368 "nguid": "73C0ED609E15467A88BFFC77854006C6", 00:31:06.368 "uuid": "73c0ed60-9e15-467a-88bf-fc77854006c6" 00:31:06.368 } 00:31:06.368 ] 00:31:06.368 } 00:31:06.368 ] 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.368 04:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:06.368 04:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:06.368 04:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:06.368 04:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:31:06.368 04:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:06.368 04:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:06.368 04:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:06.368 04:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:06.368 04:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:31:06.368 04:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:06.368 04:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.368 04:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:06.368 04:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:06.368 04:19:00 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:06.368 04:19:00 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:31:06.368 04:19:00 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:06.368 04:19:00 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:31:06.368 04:19:00 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:06.368 04:19:00 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:06.368 rmmod nvme_tcp 00:31:06.368 rmmod nvme_fabrics 00:31:06.368 rmmod nvme_keyring 00:31:06.368 04:19:00 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:06.368 04:19:00 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:31:06.368 04:19:00 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:31:06.368 04:19:00 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 2564245 ']' 00:31:06.368 04:19:00 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2564245 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2564245 ']' 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2564245 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2564245 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2564245' 00:31:06.368 killing process with pid 2564245 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2564245 00:31:06.368 04:19:00 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2564245 00:31:08.272 04:19:02 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:08.272 04:19:02 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:08.272 04:19:02 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:08.272 04:19:02 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:31:08.272 04:19:02 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:31:08.272 04:19:02 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:08.272 04:19:02 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:31:08.272 04:19:02 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:08.272 04:19:02 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:08.272 04:19:02 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.272 04:19:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:08.272 04:19:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.181 04:19:04 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:10.181 00:31:10.181 real 0m18.628s 00:31:10.181 user 0m26.433s 00:31:10.181 sys 0m3.330s 00:31:10.181 04:19:04 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:10.181 04:19:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:10.181 ************************************ 00:31:10.181 END TEST nvmf_identify_passthru 00:31:10.181 ************************************ 00:31:10.181 04:19:04 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:10.181 04:19:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:10.181 04:19:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:10.181 04:19:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.181 ************************************ 00:31:10.181 START TEST nvmf_dif 00:31:10.181 ************************************ 00:31:10.181 04:19:04 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:10.181 * Looking for test 
storage... 00:31:10.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:10.181 04:19:04 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:10.181 04:19:04 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:31:10.181 04:19:04 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:10.181 04:19:04 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:31:10.181 04:19:04 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:10.181 04:19:04 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:10.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.181 --rc genhtml_branch_coverage=1 00:31:10.181 --rc genhtml_function_coverage=1 00:31:10.181 --rc genhtml_legend=1 00:31:10.181 --rc geninfo_all_blocks=1 00:31:10.181 --rc geninfo_unexecuted_blocks=1 00:31:10.181 00:31:10.181 ' 00:31:10.181 04:19:04 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:10.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.181 --rc genhtml_branch_coverage=1 00:31:10.181 --rc genhtml_function_coverage=1 00:31:10.181 --rc genhtml_legend=1 00:31:10.181 --rc geninfo_all_blocks=1 00:31:10.181 --rc geninfo_unexecuted_blocks=1 00:31:10.181 00:31:10.181 ' 00:31:10.181 04:19:04 nvmf_dif -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:10.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.181 --rc genhtml_branch_coverage=1 00:31:10.181 --rc genhtml_function_coverage=1 00:31:10.181 --rc genhtml_legend=1 00:31:10.181 --rc geninfo_all_blocks=1 00:31:10.181 --rc geninfo_unexecuted_blocks=1 00:31:10.181 00:31:10.181 ' 00:31:10.181 04:19:04 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:10.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.181 --rc genhtml_branch_coverage=1 00:31:10.181 --rc genhtml_function_coverage=1 00:31:10.181 --rc genhtml_legend=1 00:31:10.181 --rc geninfo_all_blocks=1 00:31:10.181 --rc geninfo_unexecuted_blocks=1 00:31:10.181 00:31:10.181 ' 00:31:10.181 04:19:04 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:10.181 04:19:04 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:10.181 04:19:04 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:10.181 04:19:04 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:10.181 04:19:04 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:10.181 04:19:04 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:10.181 04:19:04 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:10.181 04:19:04 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:10.181 04:19:04 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:10.181 04:19:04 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:10.181 04:19:04 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:10.181 04:19:04 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:10.181 04:19:04 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:10.181 04:19:04 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:10.181 04:19:04 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:10.181 04:19:04 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:10.181 04:19:04 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:10.181 04:19:04 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:10.181 04:19:04 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.181 04:19:04 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.181 04:19:04 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.181 04:19:04 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.181 04:19:04 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.182 04:19:04 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:31:10.182 04:19:04 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:10.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:10.182 04:19:04 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:10.182 04:19:04 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:10.182 04:19:04 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:10.182 04:19:04 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:10.182 04:19:04 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.182 04:19:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:10.182 04:19:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:10.182 04:19:04 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:31:10.182 04:19:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:12.718 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:12.718 
04:19:06 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:12.718 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:12.718 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:12.718 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:12.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:12.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:31:12.718 00:31:12.718 --- 10.0.0.2 ping statistics --- 00:31:12.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.718 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:12.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:12.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:31:12.718 00:31:12.718 --- 10.0.0.1 ping statistics --- 00:31:12.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.718 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:31:12.718 04:19:06 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:13.656 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:13.656 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:13.656 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:13.656 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:13.656 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:13.656 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:13.656 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:13.656 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:13.656 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:13.656 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:13.656 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:13.656 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:13.656 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:13.656 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:13.656 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:13.656 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:13.656 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:13.656 04:19:08 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:13.656 04:19:08 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:13.656 04:19:08 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:13.656 04:19:08 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:13.656 04:19:08 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:13.656 04:19:08 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:13.914 04:19:08 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:13.914 04:19:08 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:13.914 04:19:08 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:13.914 04:19:08 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:13.914 04:19:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:13.914 04:19:08 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2567519 00:31:13.914 04:19:08 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:13.914 04:19:08 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2567519 00:31:13.914 04:19:08 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2567519 ']' 00:31:13.914 04:19:08 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.914 04:19:08 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:13.914 04:19:08 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:31:13.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.915 04:19:08 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:13.915 04:19:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:13.915 [2024-12-10 04:19:08.109978] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:31:13.915 [2024-12-10 04:19:08.110057] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.915 [2024-12-10 04:19:08.180929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.915 [2024-12-10 04:19:08.236196] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:13.915 [2024-12-10 04:19:08.236253] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:13.915 [2024-12-10 04:19:08.236281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:13.915 [2024-12-10 04:19:08.236293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:13.915 [2024-12-10 04:19:08.236303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:13.915 [2024-12-10 04:19:08.236952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.173 04:19:08 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:14.173 04:19:08 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:31:14.173 04:19:08 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:14.173 04:19:08 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:14.173 04:19:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:14.173 04:19:08 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.173 04:19:08 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:14.173 04:19:08 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:14.173 04:19:08 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.173 04:19:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:14.173 [2024-12-10 04:19:08.380330] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.173 04:19:08 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.173 04:19:08 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:14.173 04:19:08 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:14.173 04:19:08 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:14.173 04:19:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:14.173 ************************************ 00:31:14.173 START TEST fio_dif_1_default 00:31:14.173 ************************************ 00:31:14.173 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:31:14.173 04:19:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:14.173 04:19:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:14.173 04:19:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:31:14.173 04:19:08 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:31:14.173 04:19:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:14.173 04:19:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:14.173 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.173 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:14.173 bdev_null0 00:31:14.173 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.173 04:19:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:14.173 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.173 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:14.173 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.173 04:19:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:14.174 [2024-12-10 04:19:08.444721] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:31:14.174 { 00:31:14.174 "params": { 00:31:14.174 "name": "Nvme$subsystem", 00:31:14.174 "trtype": "$TEST_TRANSPORT", 00:31:14.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.174 "adrfam": "ipv4", 00:31:14.174 "trsvcid": "$NVMF_PORT", 00:31:14.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.174 "hdgst": ${hdgst:-false}, 00:31:14.174 "ddgst": ${ddgst:-false} 00:31:14.174 }, 00:31:14.174 "method": "bdev_nvme_attach_controller" 00:31:14.174 } 00:31:14.174 EOF 00:31:14.174 )") 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
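Note on the trace above: gen_nvmf_target_json builds one bdev_nvme_attach_controller fragment per subsystem and filters it through jq, and fio's spdk_bdev ioengine then reads that JSON via --spdk_json_conf (passed here as /dev/fd/62). Below is a minimal stand-alone sketch of the same flow. The attach-controller parameters and plugin paths are taken from this log; the subsystems/config wrapper, the Nvme0n1 bdev name, and the bs/iodepth/runtime values are illustrative assumptions, not the exact values the test generates.

#!/usr/bin/env bash
# Sketch only: drive the NVMe-oF/TCP target from this log through fio's SPDK bdev plugin.
# Assumes the target is already listening on 10.0.0.2:4420 and that the SPDK tree
# lives at $SPDK_DIR (path copied from the trace).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# JSON bdev config: attach one NVMe-oF controller with digests off. The params block
# matches the rendered config printed in the log; the subsystems/config wrapper is the
# usual SPDK JSON layout and is assumed here.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Short random-read job against the attached namespace. "Nvme0n1" is the usual bdev
# name for namespace 1 of controller Nvme0 (assumption, not shown in the log);
# thread=1 is needed by the SPDK fio plugins.
LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" /usr/src/fio/fio \
  --name=filename0 --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json \
  --thread=1 --filename=Nvme0n1 --rw=randread --bs=4k --iodepth=4 \
  --time_based=1 --runtime=10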
00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:14.174 "params": { 00:31:14.174 "name": "Nvme0", 00:31:14.174 "trtype": "tcp", 00:31:14.174 "traddr": "10.0.0.2", 00:31:14.174 "adrfam": "ipv4", 00:31:14.174 "trsvcid": "4420", 00:31:14.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:14.174 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:14.174 "hdgst": false, 00:31:14.174 "ddgst": false 00:31:14.174 }, 00:31:14.174 "method": "bdev_nvme_attach_controller" 00:31:14.174 }' 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:14.174 04:19:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.432 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:14.432 fio-3.35 00:31:14.432 Starting 1 thread 00:31:26.633 00:31:26.633 filename0: (groupid=0, jobs=1): err= 0: pid=2567747: Tue Dec 10 04:19:19 2024 00:31:26.633 read: IOPS=221, BW=887KiB/s (908kB/s)(8896KiB/10027msec) 00:31:26.633 slat (nsec): min=6671, max=74900, avg=8990.99, stdev=4236.21 00:31:26.633 clat (usec): min=536, max=46496, avg=18005.90, stdev=20112.40 00:31:26.633 lat (usec): min=544, max=46540, avg=18014.89, stdev=20112.35 00:31:26.633 clat percentiles (usec): 00:31:26.633 | 1.00th=[ 586], 5.00th=[ 619], 10.00th=[ 635], 20.00th=[ 660], 00:31:26.633 | 30.00th=[ 676], 40.00th=[ 693], 50.00th=[ 742], 60.00th=[41157], 00:31:26.633 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:26.633 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:31:26.633 | 99.99th=[46400] 00:31:26.633 bw ( KiB/s): min= 672, max= 1536, per=100.00%, avg=888.00, stdev=189.42, samples=20 00:31:26.633 iops : min= 168, max= 384, avg=222.00, stdev=47.36, samples=20 00:31:26.633 lat (usec) : 750=52.74%, 1000=4.63% 00:31:26.633 lat (msec) : 50=42.63% 00:31:26.633 cpu : usr=90.62%, sys=9.09%, ctx=26, majf=0, minf=236 00:31:26.633 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.633 issued rwts: total=2224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.633 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:26.633 
00:31:26.633 Run status group 0 (all jobs): 00:31:26.633 READ: bw=887KiB/s (908kB/s), 887KiB/s-887KiB/s (908kB/s-908kB/s), io=8896KiB (9110kB), run=10027-10027msec 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.633 00:31:26.633 real 0m11.348s 00:31:26.633 user 0m10.420s 00:31:26.633 sys 0m1.160s 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:26.633 ************************************ 00:31:26.633 END TEST fio_dif_1_default 00:31:26.633 ************************************ 00:31:26.633 04:19:19 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:26.633 04:19:19 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:26.633 04:19:19 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:26.633 04:19:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:26.633 ************************************ 00:31:26.633 START TEST fio_dif_1_multi_subsystems 00:31:26.633 ************************************ 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:26.633 bdev_null0 00:31:26.633 04:19:19 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:26.633 [2024-12-10 04:19:19.846694] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:26.633 bdev_null1 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:26.633 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:26.634 { 00:31:26.634 "params": { 00:31:26.634 "name": "Nvme$subsystem", 00:31:26.634 "trtype": "$TEST_TRANSPORT", 00:31:26.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:26.634 "adrfam": "ipv4", 00:31:26.634 "trsvcid": "$NVMF_PORT", 00:31:26.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:26.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:26.634 "hdgst": ${hdgst:-false}, 00:31:26.634 "ddgst": ${ddgst:-false} 00:31:26.634 }, 00:31:26.634 "method": "bdev_nvme_attach_controller" 00:31:26.634 } 00:31:26.634 EOF 00:31:26.634 )") 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:26.634 
04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:26.634 { 00:31:26.634 "params": { 00:31:26.634 "name": "Nvme$subsystem", 00:31:26.634 "trtype": "$TEST_TRANSPORT", 00:31:26.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:26.634 "adrfam": "ipv4", 00:31:26.634 "trsvcid": "$NVMF_PORT", 00:31:26.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:26.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:26.634 "hdgst": ${hdgst:-false}, 00:31:26.634 "ddgst": ${ddgst:-false} 00:31:26.634 }, 00:31:26.634 "method": "bdev_nvme_attach_controller" 00:31:26.634 } 00:31:26.634 EOF 00:31:26.634 )") 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:26.634 "params": { 00:31:26.634 "name": "Nvme0", 00:31:26.634 "trtype": "tcp", 00:31:26.634 "traddr": "10.0.0.2", 00:31:26.634 "adrfam": "ipv4", 00:31:26.634 "trsvcid": "4420", 00:31:26.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:26.634 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:26.634 "hdgst": false, 00:31:26.634 "ddgst": false 00:31:26.634 }, 00:31:26.634 "method": "bdev_nvme_attach_controller" 00:31:26.634 },{ 00:31:26.634 "params": { 00:31:26.634 "name": "Nvme1", 00:31:26.634 "trtype": "tcp", 00:31:26.634 "traddr": "10.0.0.2", 00:31:26.634 "adrfam": "ipv4", 00:31:26.634 "trsvcid": "4420", 00:31:26.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:26.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:26.634 "hdgst": false, 00:31:26.634 "ddgst": false 00:31:26.634 }, 00:31:26.634 "method": "bdev_nvme_attach_controller" 00:31:26.634 }' 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # asan_lib= 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:26.634 04:19:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.634 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:26.634 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:26.634 fio-3.35 00:31:26.634 Starting 2 threads 00:31:36.610 00:31:36.610 filename0: (groupid=0, jobs=1): err= 0: pid=2569149: Tue Dec 10 04:19:30 2024 00:31:36.610 read: IOPS=198, BW=792KiB/s (811kB/s)(7936KiB/10020msec) 00:31:36.610 slat (nsec): min=6887, max=73165, avg=8569.80, stdev=3037.35 00:31:36.610 clat (usec): min=535, max=46582, avg=20173.72, stdev=20331.17 00:31:36.610 lat (usec): min=542, max=46618, avg=20182.29, stdev=20331.05 00:31:36.610 clat percentiles (usec): 00:31:36.610 | 1.00th=[ 562], 5.00th=[ 586], 10.00th=[ 594], 20.00th=[ 619], 00:31:36.610 | 30.00th=[ 644], 40.00th=[ 685], 50.00th=[ 824], 60.00th=[41157], 00:31:36.610 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:36.610 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:31:36.610 | 99.99th=[46400] 00:31:36.610 bw ( KiB/s): min= 704, max= 928, per=66.51%, avg=792.00, stdev=53.82, samples=20 00:31:36.610 iops : min= 176, max= 232, avg=198.00, stdev=13.46, samples=20 00:31:36.610 lat (usec) : 750=47.98%, 1000=3.83% 00:31:36.610 lat (msec) : 2=0.20%, 50=47.98% 00:31:36.610 cpu : usr=94.61%, sys=5.09%, ctx=13, majf=0, minf=153 00:31:36.610 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.610 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.610 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:36.611 filename1: (groupid=0, jobs=1): err= 0: pid=2569150: Tue Dec 10 04:19:30 2024 00:31:36.611 read: IOPS=99, BW=398KiB/s (407kB/s)(3984KiB/10022msec) 00:31:36.611 slat (nsec): min=6071, max=36208, avg=8763.08, stdev=2941.24 00:31:36.611 clat (usec): min=556, max=43188, avg=40220.26, stdev=5534.03 00:31:36.611 lat (usec): min=563, max=43202, avg=40229.03, stdev=5533.77 00:31:36.611 clat percentiles (usec): 00:31:36.611 | 1.00th=[ 660], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:36.611 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:36.611 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:36.611 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:31:36.611 | 99.99th=[43254] 00:31:36.611 bw ( KiB/s): min= 384, max= 448, per=33.29%, avg=396.80, stdev=19.14, samples=20 00:31:36.611 iops : min= 96, max= 112, avg=99.20, stdev= 4.79, samples=20 00:31:36.611 lat (usec) : 750=1.61% 00:31:36.611 lat (msec) : 10=0.40%, 50=97.99% 00:31:36.611 cpu : usr=95.08%, sys=4.62%, ctx=15, majf=0, minf=157 00:31:36.611 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.611 issued rwts: total=996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.611 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:36.611 00:31:36.611 Run status group 0 (all jobs): 00:31:36.611 READ: bw=1189KiB/s (1218kB/s), 398KiB/s-792KiB/s (407kB/s-811kB/s), io=11.6MiB (12.2MB), run=10020-10022msec 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.870 00:31:36.870 real 0m11.344s 00:31:36.870 user 0m20.283s 00:31:36.870 sys 0m1.269s 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:36.870 04:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:36.870 ************************************ 00:31:36.870 END TEST fio_dif_1_multi_subsystems 00:31:36.870 ************************************ 00:31:36.870 04:19:31 nvmf_dif -- target/dif.sh@143 -- 
# run_test fio_dif_rand_params fio_dif_rand_params 00:31:36.870 04:19:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:36.870 04:19:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:36.870 04:19:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:36.870 ************************************ 00:31:36.870 START TEST fio_dif_rand_params 00:31:36.870 ************************************ 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.870 bdev_null0 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.870 [2024-12-10 04:19:31.238961] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
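For reference, the create_subsystems helper exercised here reduces to a short RPC sequence against the running nvmf_tgt. A hand-run equivalent of this DIF-type-3 setup, using scripts/rpc.py with method names and arguments copied from the rpc_cmd calls in the trace, would look roughly as follows (assumes the target started earlier in the log is still up and answering on /var/tmp/spdk.sock):

#!/usr/bin/env bash
# Sketch only: replay the fio_dif_rand_params subsystem setup by hand.
# All methods and flags below are copied verbatim from the trace.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport with DIF insert/strip enabled (done once by nvmfappstart).
$RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip

# 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3.
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Subsystem, namespace, and TCP listener on 10.0.0.2:4420.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420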
00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:36.870 { 00:31:36.870 "params": { 00:31:36.870 "name": "Nvme$subsystem", 00:31:36.870 "trtype": "$TEST_TRANSPORT", 00:31:36.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.870 "adrfam": "ipv4", 00:31:36.870 "trsvcid": "$NVMF_PORT", 00:31:36.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.870 "hdgst": ${hdgst:-false}, 00:31:36.870 "ddgst": ${ddgst:-false} 00:31:36.870 }, 00:31:36.870 "method": "bdev_nvme_attach_controller" 00:31:36.870 } 00:31:36.870 EOF 00:31:36.870 )") 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # 
jq . 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:36.870 04:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:36.870 "params": { 00:31:36.870 "name": "Nvme0", 00:31:36.870 "trtype": "tcp", 00:31:36.870 "traddr": "10.0.0.2", 00:31:36.870 "adrfam": "ipv4", 00:31:36.870 "trsvcid": "4420", 00:31:36.870 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:36.870 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:36.870 "hdgst": false, 00:31:36.870 "ddgst": false 00:31:36.870 }, 00:31:36.870 "method": "bdev_nvme_attach_controller" 00:31:36.870 }' 00:31:37.129 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:37.129 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:37.129 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:37.129 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:37.129 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:37.129 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:37.129 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:37.129 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:37.129 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:37.129 04:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:37.387 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:37.387 ... 
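The three-thread run that follows uses the parameters set at the top of this test (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5). The exact job file emitted by gen_fio_conf is not echoed in the trace, so the sketch below is only a hand-written approximation, reusing the /tmp/bdev.json config from the earlier sketch; the bdev name and the global keys are assumptions, while the command-line shape mirrors the fio invocation shown in the log.

#!/usr/bin/env bash
# Sketch only: approximate stand-in for the generated fio config of the first
# fio_dif_rand_params pass (bs=128k, numjobs=3, iodepth=3, runtime=5).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

cat > /tmp/dif_rand.fio <<'EOF'
[global]
thread=1
time_based=1
runtime=5
rw=randread
bs=128k
iodepth=3
numjobs=3

[filename0]
filename=Nvme0n1
EOF

# Same pattern as the trace: ioengine and JSON config on the command line,
# job file passed as the positional argument.
LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json /tmp/dif_rand.fio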
00:31:37.387 fio-3.35 00:31:37.387 Starting 3 threads 00:31:43.947 00:31:43.947 filename0: (groupid=0, jobs=1): err= 0: pid=2570547: Tue Dec 10 04:19:37 2024 00:31:43.947 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(144MiB/5005msec) 00:31:43.947 slat (nsec): min=6432, max=45958, avg=15049.36, stdev=3932.82 00:31:43.947 clat (usec): min=5227, max=52557, avg=13024.24, stdev=2740.48 00:31:43.947 lat (usec): min=5241, max=52578, avg=13039.29, stdev=2740.58 00:31:43.947 clat percentiles (usec): 00:31:43.947 | 1.00th=[ 7898], 5.00th=[10028], 10.00th=[10814], 20.00th=[11338], 00:31:43.947 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12911], 60.00th=[13304], 00:31:43.947 | 70.00th=[13960], 80.00th=[14484], 90.00th=[15401], 95.00th=[15926], 00:31:43.947 | 99.00th=[17171], 99.50th=[17433], 99.90th=[52167], 99.95th=[52691], 00:31:43.947 | 99.99th=[52691] 00:31:43.947 bw ( KiB/s): min=27392, max=32256, per=34.16%, avg=29414.40, stdev=1383.61, samples=10 00:31:43.947 iops : min= 214, max= 252, avg=229.80, stdev=10.81, samples=10 00:31:43.947 lat (msec) : 10=5.04%, 20=94.70%, 100=0.26% 00:31:43.947 cpu : usr=93.17%, sys=6.29%, ctx=10, majf=0, minf=125 00:31:43.947 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.947 issued rwts: total=1151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.947 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:43.947 filename0: (groupid=0, jobs=1): err= 0: pid=2570548: Tue Dec 10 04:19:37 2024 00:31:43.947 read: IOPS=231, BW=29.0MiB/s (30.4MB/s)(145MiB/5007msec) 00:31:43.947 slat (nsec): min=7577, max=90208, avg=16663.07, stdev=5510.42 00:31:43.947 clat (usec): min=7210, max=53822, avg=12925.91, stdev=3906.39 00:31:43.948 lat (usec): min=7229, max=53834, avg=12942.57, stdev=3906.13 00:31:43.948 clat percentiles (usec): 00:31:43.948 | 1.00th=[ 8291], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11207], 00:31:43.948 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12518], 60.00th=[13042], 00:31:43.948 | 70.00th=[13435], 80.00th=[14222], 90.00th=[15139], 95.00th=[15664], 00:31:43.948 | 99.00th=[17433], 99.50th=[51643], 99.90th=[53216], 99.95th=[53740], 00:31:43.948 | 99.99th=[53740] 00:31:43.948 bw ( KiB/s): min=26368, max=31488, per=34.40%, avg=29619.20, stdev=1417.92, samples=10 00:31:43.948 iops : min= 206, max= 246, avg=231.40, stdev=11.08, samples=10 00:31:43.948 lat (msec) : 10=5.17%, 20=94.05%, 100=0.78% 00:31:43.948 cpu : usr=93.19%, sys=6.29%, ctx=13, majf=0, minf=114 00:31:43.948 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.948 issued rwts: total=1160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.948 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:43.948 filename0: (groupid=0, jobs=1): err= 0: pid=2570549: Tue Dec 10 04:19:37 2024 00:31:43.948 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(135MiB/5045msec) 00:31:43.948 slat (nsec): min=5847, max=83882, avg=15155.76, stdev=4800.29 00:31:43.948 clat (usec): min=8584, max=58819, avg=13917.30, stdev=3356.49 00:31:43.948 lat (usec): min=8597, max=58836, avg=13932.46, stdev=3356.64 00:31:43.948 clat percentiles (usec): 00:31:43.948 | 1.00th=[ 9110], 5.00th=[10552], 10.00th=[11207], 20.00th=[11863], 
00:31:43.948 | 30.00th=[12518], 40.00th=[13173], 50.00th=[13829], 60.00th=[14353], 00:31:43.948 | 70.00th=[15008], 80.00th=[15533], 90.00th=[16319], 95.00th=[16909], 00:31:43.948 | 99.00th=[17957], 99.50th=[18482], 99.90th=[56886], 99.95th=[58983], 00:31:43.948 | 99.99th=[58983] 00:31:43.948 bw ( KiB/s): min=24576, max=30208, per=32.11%, avg=27648.00, stdev=1521.71, samples=10 00:31:43.948 iops : min= 192, max= 236, avg=216.00, stdev=11.89, samples=10 00:31:43.948 lat (msec) : 10=2.03%, 20=97.51%, 50=0.18%, 100=0.28% 00:31:43.948 cpu : usr=93.10%, sys=6.36%, ctx=9, majf=0, minf=104 00:31:43.948 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.948 issued rwts: total=1083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.948 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:43.948 00:31:43.948 Run status group 0 (all jobs): 00:31:43.948 READ: bw=84.1MiB/s (88.2MB/s), 26.8MiB/s-29.0MiB/s (28.1MB/s-30.4MB/s), io=424MiB (445MB), run=5005-5045msec 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.948 bdev_null0 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.948 [2024-12-10 04:19:37.401086] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.948 bdev_null1 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.948 bdev_null2 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:43.948 04:19:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:43.948 { 00:31:43.948 "params": { 00:31:43.948 "name": 
"Nvme$subsystem", 00:31:43.948 "trtype": "$TEST_TRANSPORT", 00:31:43.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.949 "adrfam": "ipv4", 00:31:43.949 "trsvcid": "$NVMF_PORT", 00:31:43.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.949 "hdgst": ${hdgst:-false}, 00:31:43.949 "ddgst": ${ddgst:-false} 00:31:43.949 }, 00:31:43.949 "method": "bdev_nvme_attach_controller" 00:31:43.949 } 00:31:43.949 EOF 00:31:43.949 )") 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:43.949 { 00:31:43.949 "params": { 00:31:43.949 "name": "Nvme$subsystem", 00:31:43.949 "trtype": "$TEST_TRANSPORT", 00:31:43.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.949 "adrfam": "ipv4", 00:31:43.949 "trsvcid": "$NVMF_PORT", 00:31:43.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.949 "hdgst": ${hdgst:-false}, 00:31:43.949 "ddgst": ${ddgst:-false} 00:31:43.949 }, 00:31:43.949 "method": "bdev_nvme_attach_controller" 00:31:43.949 } 00:31:43.949 EOF 00:31:43.949 )") 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:43.949 { 00:31:43.949 "params": { 00:31:43.949 "name": "Nvme$subsystem", 00:31:43.949 "trtype": "$TEST_TRANSPORT", 00:31:43.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.949 "adrfam": "ipv4", 00:31:43.949 "trsvcid": "$NVMF_PORT", 00:31:43.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.949 "hdgst": ${hdgst:-false}, 00:31:43.949 "ddgst": ${ddgst:-false} 00:31:43.949 }, 00:31:43.949 "method": "bdev_nvme_attach_controller" 00:31:43.949 } 00:31:43.949 EOF 00:31:43.949 )") 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:43.949 "params": { 00:31:43.949 "name": "Nvme0", 00:31:43.949 "trtype": "tcp", 00:31:43.949 "traddr": "10.0.0.2", 00:31:43.949 "adrfam": "ipv4", 00:31:43.949 "trsvcid": "4420", 00:31:43.949 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:43.949 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:43.949 "hdgst": false, 00:31:43.949 "ddgst": false 00:31:43.949 }, 00:31:43.949 "method": "bdev_nvme_attach_controller" 00:31:43.949 },{ 00:31:43.949 "params": { 00:31:43.949 "name": "Nvme1", 00:31:43.949 "trtype": "tcp", 00:31:43.949 "traddr": "10.0.0.2", 00:31:43.949 "adrfam": "ipv4", 00:31:43.949 "trsvcid": "4420", 00:31:43.949 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:43.949 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:43.949 "hdgst": false, 00:31:43.949 "ddgst": false 00:31:43.949 }, 00:31:43.949 "method": "bdev_nvme_attach_controller" 00:31:43.949 },{ 00:31:43.949 "params": { 00:31:43.949 "name": "Nvme2", 00:31:43.949 "trtype": "tcp", 00:31:43.949 "traddr": "10.0.0.2", 00:31:43.949 "adrfam": "ipv4", 00:31:43.949 "trsvcid": "4420", 00:31:43.949 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:43.949 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:43.949 "hdgst": false, 00:31:43.949 "ddgst": false 00:31:43.949 }, 00:31:43.949 "method": "bdev_nvme_attach_controller" 00:31:43.949 }' 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:43.949 04:19:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:43.949 04:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:43.949 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:43.949 ... 00:31:43.949 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:43.949 ... 00:31:43.949 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:43.949 ... 00:31:43.949 fio-3.35 00:31:43.949 Starting 24 threads 00:31:56.154 00:31:56.154 filename0: (groupid=0, jobs=1): err= 0: pid=2571412: Tue Dec 10 04:19:48 2024 00:31:56.154 read: IOPS=465, BW=1860KiB/s (1905kB/s)(18.2MiB/10012msec) 00:31:56.154 slat (nsec): min=8205, max=97496, avg=40609.50, stdev=12165.80 00:31:56.154 clat (usec): min=18071, max=86262, avg=34035.98, stdev=3339.16 00:31:56.154 lat (usec): min=18080, max=86296, avg=34076.59, stdev=3338.93 00:31:56.154 clat percentiles (usec): 00:31:56.154 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33424], 20.00th=[33424], 00:31:56.154 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.154 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.154 | 99.00th=[39584], 99.50th=[43254], 99.90th=[86508], 99.95th=[86508], 00:31:56.154 | 99.99th=[86508] 00:31:56.154 bw ( KiB/s): min= 1539, max= 1920, per=4.14%, avg=1852.79, stdev=98.33, samples=19 00:31:56.154 iops : min= 384, max= 480, avg=463.16, stdev=24.71, samples=19 00:31:56.154 lat (msec) : 20=0.34%, 50=99.31%, 100=0.34% 00:31:56.154 cpu : usr=97.17%, sys=1.76%, ctx=281, majf=0, minf=28 00:31:56.154 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:56.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.154 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.154 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.154 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.154 filename0: (groupid=0, jobs=1): err= 0: pid=2571413: Tue Dec 10 04:19:48 2024 00:31:56.154 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10007msec) 00:31:56.154 slat (usec): min=8, max=109, avg=36.83, stdev=15.92 00:31:56.154 clat (usec): min=16116, max=43398, avg=33851.92, stdev=1787.39 00:31:56.154 lat (usec): min=16213, max=43426, avg=33888.75, stdev=1786.31 00:31:56.154 clat percentiles (usec): 00:31:56.154 | 1.00th=[19530], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:31:56.154 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.154 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.154 | 99.00th=[35914], 99.50th=[40109], 99.90th=[43254], 99.95th=[43254], 00:31:56.154 | 99.99th=[43254] 00:31:56.154 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1868.80, stdev=64.34, samples=20 00:31:56.154 iops : min= 448, max= 480, avg=467.20, stdev=16.08, samples=20 00:31:56.154 lat (msec) : 20=1.02%, 50=98.98% 00:31:56.154 cpu : usr=97.80%, sys=1.49%, ctx=99, majf=0, minf=50 00:31:56.154 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 
32=0.0%, >=64=0.0% 00:31:56.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.154 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.154 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.154 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.154 filename0: (groupid=0, jobs=1): err= 0: pid=2571414: Tue Dec 10 04:19:48 2024 00:31:56.154 read: IOPS=466, BW=1866KiB/s (1911kB/s)(18.2MiB/10015msec) 00:31:56.154 slat (nsec): min=14739, max=87577, avg=39129.50, stdev=11511.15 00:31:56.154 clat (usec): min=22908, max=43273, avg=33948.00, stdev=1071.95 00:31:56.154 lat (usec): min=22940, max=43302, avg=33987.13, stdev=1071.05 00:31:56.154 clat percentiles (usec): 00:31:56.154 | 1.00th=[33162], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:31:56.154 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.154 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.154 | 99.00th=[39584], 99.50th=[40109], 99.90th=[43254], 99.95th=[43254], 00:31:56.154 | 99.99th=[43254] 00:31:56.154 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1859.37, stdev=65.66, samples=19 00:31:56.154 iops : min= 448, max= 480, avg=464.84, stdev=16.42, samples=19 00:31:56.154 lat (msec) : 50=100.00% 00:31:56.154 cpu : usr=98.26%, sys=1.32%, ctx=13, majf=0, minf=40 00:31:56.154 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:56.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.155 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.155 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.155 filename0: (groupid=0, jobs=1): err= 0: pid=2571415: Tue Dec 10 04:19:48 2024 00:31:56.155 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.2MiB/10007msec) 00:31:56.155 slat (usec): min=4, max=110, avg=58.74, stdev=26.04 00:31:56.155 clat (usec): min=18367, max=50300, avg=33742.34, stdev=1488.41 00:31:56.155 lat (usec): min=18382, max=50367, avg=33801.08, stdev=1483.84 00:31:56.155 clat percentiles (usec): 00:31:56.155 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:56.155 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.155 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.155 | 99.00th=[36439], 99.50th=[42730], 99.90th=[44303], 99.95th=[44827], 00:31:56.155 | 99.99th=[50070] 00:31:56.155 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1859.37, stdev=65.66, samples=19 00:31:56.155 iops : min= 448, max= 480, avg=464.84, stdev=16.42, samples=19 00:31:56.155 lat (msec) : 20=0.34%, 50=99.61%, 100=0.04% 00:31:56.155 cpu : usr=96.74%, sys=1.97%, ctx=152, majf=0, minf=31 00:31:56.155 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:56.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.155 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.155 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.155 filename0: (groupid=0, jobs=1): err= 0: pid=2571416: Tue Dec 10 04:19:48 2024 00:31:56.155 read: IOPS=467, BW=1870KiB/s (1915kB/s)(18.3MiB/10027msec) 00:31:56.155 slat (usec): min=6, max=120, avg=32.17, stdev=25.29 00:31:56.155 clat (usec): min=16300, 
max=43471, avg=33954.02, stdev=1604.42 00:31:56.155 lat (usec): min=16327, max=43492, avg=33986.18, stdev=1600.56 00:31:56.155 clat percentiles (usec): 00:31:56.155 | 1.00th=[31589], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:31:56.155 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.155 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.155 | 99.00th=[39584], 99.50th=[39584], 99.90th=[43254], 99.95th=[43254], 00:31:56.155 | 99.99th=[43254] 00:31:56.155 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1868.80, stdev=64.34, samples=20 00:31:56.155 iops : min= 448, max= 480, avg=467.20, stdev=16.08, samples=20 00:31:56.155 lat (msec) : 20=0.68%, 50=99.32% 00:31:56.155 cpu : usr=97.98%, sys=1.60%, ctx=34, majf=0, minf=32 00:31:56.155 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:56.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.155 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.155 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.155 filename0: (groupid=0, jobs=1): err= 0: pid=2571417: Tue Dec 10 04:19:48 2024 00:31:56.155 read: IOPS=466, BW=1866KiB/s (1911kB/s)(18.2MiB/10015msec) 00:31:56.155 slat (nsec): min=8263, max=89164, avg=34551.13, stdev=9004.02 00:31:56.155 clat (usec): min=14397, max=58412, avg=34000.12, stdev=2086.12 00:31:56.155 lat (usec): min=14431, max=58436, avg=34034.67, stdev=2085.00 00:31:56.155 clat percentiles (usec): 00:31:56.155 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:31:56.155 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.155 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:31:56.155 | 99.00th=[41157], 99.50th=[41681], 99.90th=[58459], 99.95th=[58459], 00:31:56.155 | 99.99th=[58459] 00:31:56.155 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1859.37, stdev=78.31, samples=19 00:31:56.155 iops : min= 416, max= 480, avg=464.84, stdev=19.58, samples=19 00:31:56.155 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:31:56.155 cpu : usr=98.13%, sys=1.42%, ctx=13, majf=0, minf=39 00:31:56.155 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:56.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.155 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.155 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.155 filename0: (groupid=0, jobs=1): err= 0: pid=2571418: Tue Dec 10 04:19:48 2024 00:31:56.155 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10007msec) 00:31:56.155 slat (nsec): min=11077, max=84238, avg=37148.71, stdev=10697.91 00:31:56.155 clat (usec): min=14842, max=43293, avg=33834.08, stdev=1785.95 00:31:56.155 lat (usec): min=14883, max=43316, avg=33871.23, stdev=1785.26 00:31:56.155 clat percentiles (usec): 00:31:56.155 | 1.00th=[20579], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:31:56.155 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.155 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.155 | 99.00th=[35914], 99.50th=[40109], 99.90th=[43254], 99.95th=[43254], 00:31:56.155 | 99.99th=[43254] 00:31:56.155 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1868.80, 
stdev=64.34, samples=20 00:31:56.155 iops : min= 448, max= 480, avg=467.20, stdev=16.08, samples=20 00:31:56.155 lat (msec) : 20=0.98%, 50=99.02% 00:31:56.155 cpu : usr=98.26%, sys=1.26%, ctx=22, majf=0, minf=39 00:31:56.155 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:56.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.155 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.155 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.155 filename0: (groupid=0, jobs=1): err= 0: pid=2571419: Tue Dec 10 04:19:48 2024 00:31:56.155 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10007msec) 00:31:56.155 slat (usec): min=11, max=143, avg=57.22, stdev=22.23 00:31:56.155 clat (usec): min=16628, max=43315, avg=33650.04, stdev=1772.69 00:31:56.155 lat (usec): min=16653, max=43335, avg=33707.27, stdev=1774.11 00:31:56.155 clat percentiles (usec): 00:31:56.155 | 1.00th=[19268], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:31:56.155 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.155 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.155 | 99.00th=[35914], 99.50th=[40109], 99.90th=[43254], 99.95th=[43254], 00:31:56.155 | 99.99th=[43254] 00:31:56.155 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1868.80, stdev=64.34, samples=20 00:31:56.155 iops : min= 448, max= 480, avg=467.20, stdev=16.08, samples=20 00:31:56.155 lat (msec) : 20=1.02%, 50=98.98% 00:31:56.155 cpu : usr=98.34%, sys=1.22%, ctx=16, majf=0, minf=27 00:31:56.155 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:56.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.155 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.155 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.155 filename1: (groupid=0, jobs=1): err= 0: pid=2571420: Tue Dec 10 04:19:48 2024 00:31:56.155 read: IOPS=466, BW=1866KiB/s (1911kB/s)(18.2MiB/10013msec) 00:31:56.155 slat (nsec): min=7190, max=93896, avg=36397.78, stdev=12922.94 00:31:56.155 clat (usec): min=22992, max=43247, avg=33962.23, stdev=1044.98 00:31:56.155 lat (usec): min=23022, max=43276, avg=33998.62, stdev=1042.85 00:31:56.155 clat percentiles (usec): 00:31:56.155 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33424], 20.00th=[33424], 00:31:56.155 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.155 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:31:56.155 | 99.00th=[37487], 99.50th=[40109], 99.90th=[43254], 99.95th=[43254], 00:31:56.155 | 99.99th=[43254] 00:31:56.155 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1859.37, stdev=65.66, samples=19 00:31:56.155 iops : min= 448, max= 480, avg=464.84, stdev=16.42, samples=19 00:31:56.155 lat (msec) : 50=100.00% 00:31:56.155 cpu : usr=97.75%, sys=1.62%, ctx=86, majf=0, minf=23 00:31:56.155 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:56.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.155 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.155 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.155 latency : target=0, window=0, percentile=100.00%, depth=16 
00:31:56.155 filename1: (groupid=0, jobs=1): err= 0: pid=2571421: Tue Dec 10 04:19:48 2024 00:31:56.155 read: IOPS=466, BW=1866KiB/s (1911kB/s)(18.2MiB/10015msec) 00:31:56.155 slat (nsec): min=11968, max=93241, avg=37049.99, stdev=9229.28 00:31:56.155 clat (usec): min=14372, max=73309, avg=33962.20, stdev=2256.01 00:31:56.155 lat (usec): min=14399, max=73342, avg=33999.25, stdev=2255.21 00:31:56.155 clat percentiles (usec): 00:31:56.155 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:31:56.155 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.155 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.155 | 99.00th=[41157], 99.50th=[41681], 99.90th=[57934], 99.95th=[57934], 00:31:56.155 | 99.99th=[72877] 00:31:56.155 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1859.37, stdev=78.31, samples=19 00:31:56.155 iops : min= 416, max= 480, avg=464.84, stdev=19.58, samples=19 00:31:56.155 lat (msec) : 20=0.36%, 50=99.25%, 100=0.39% 00:31:56.155 cpu : usr=98.15%, sys=1.43%, ctx=11, majf=0, minf=22 00:31:56.155 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:56.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.155 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.155 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.155 filename1: (groupid=0, jobs=1): err= 0: pid=2571422: Tue Dec 10 04:19:48 2024 00:31:56.155 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.2MiB/10007msec) 00:31:56.155 slat (usec): min=4, max=108, avg=36.54, stdev=20.55 00:31:56.155 clat (usec): min=18512, max=44735, avg=33932.47, stdev=1367.44 00:31:56.155 lat (usec): min=18534, max=44768, avg=33969.02, stdev=1365.81 00:31:56.155 clat percentiles (usec): 00:31:56.155 | 1.00th=[32637], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:31:56.155 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.155 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.155 | 99.00th=[36963], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:31:56.155 | 99.99th=[44827] 00:31:56.155 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1859.37, stdev=65.66, samples=19 00:31:56.155 iops : min= 448, max= 480, avg=464.84, stdev=16.42, samples=19 00:31:56.155 lat (msec) : 20=0.34%, 50=99.66% 00:31:56.155 cpu : usr=98.04%, sys=1.53%, ctx=14, majf=0, minf=24 00:31:56.156 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:56.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.156 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.156 filename1: (groupid=0, jobs=1): err= 0: pid=2571423: Tue Dec 10 04:19:48 2024 00:31:56.156 read: IOPS=466, BW=1866KiB/s (1911kB/s)(18.2MiB/10014msec) 00:31:56.156 slat (nsec): min=8621, max=79635, avg=37005.24, stdev=9949.37 00:31:56.156 clat (usec): min=14432, max=57516, avg=33955.18, stdev=2046.42 00:31:56.156 lat (usec): min=14455, max=57547, avg=33992.19, stdev=2046.23 00:31:56.156 clat percentiles (usec): 00:31:56.156 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:31:56.156 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 
00:31:56.156 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.156 | 99.00th=[41157], 99.50th=[41681], 99.90th=[57410], 99.95th=[57410], 00:31:56.156 | 99.99th=[57410] 00:31:56.156 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1859.53, stdev=77.89, samples=19 00:31:56.156 iops : min= 416, max= 480, avg=464.84, stdev=19.58, samples=19 00:31:56.156 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:31:56.156 cpu : usr=97.62%, sys=1.48%, ctx=165, majf=0, minf=36 00:31:56.156 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:56.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.156 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.156 filename1: (groupid=0, jobs=1): err= 0: pid=2571424: Tue Dec 10 04:19:48 2024 00:31:56.156 read: IOPS=466, BW=1865KiB/s (1910kB/s)(18.2MiB/10014msec) 00:31:56.156 slat (usec): min=8, max=100, avg=39.38, stdev=13.47 00:31:56.156 clat (usec): min=14458, max=58322, avg=33944.34, stdev=2026.80 00:31:56.156 lat (usec): min=14492, max=58342, avg=33983.72, stdev=2026.54 00:31:56.156 clat percentiles (usec): 00:31:56.156 | 1.00th=[32900], 5.00th=[33424], 10.00th=[33424], 20.00th=[33424], 00:31:56.156 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.156 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.156 | 99.00th=[41157], 99.50th=[41681], 99.90th=[58459], 99.95th=[58459], 00:31:56.156 | 99.99th=[58459] 00:31:56.156 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1859.37, stdev=78.31, samples=19 00:31:56.156 iops : min= 416, max= 480, avg=464.84, stdev=19.58, samples=19 00:31:56.156 lat (msec) : 20=0.30%, 50=99.36%, 100=0.34% 00:31:56.156 cpu : usr=96.83%, sys=2.06%, ctx=139, majf=0, minf=24 00:31:56.156 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:56.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.156 issued rwts: total=4670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.156 filename1: (groupid=0, jobs=1): err= 0: pid=2571425: Tue Dec 10 04:19:48 2024 00:31:56.156 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10007msec) 00:31:56.156 slat (usec): min=12, max=121, avg=64.39, stdev=22.07 00:31:56.156 clat (usec): min=15951, max=43188, avg=33580.67, stdev=1802.84 00:31:56.156 lat (usec): min=16005, max=43276, avg=33645.06, stdev=1802.64 00:31:56.156 clat percentiles (usec): 00:31:56.156 | 1.00th=[19268], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:56.156 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:31:56.156 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.156 | 99.00th=[35390], 99.50th=[39584], 99.90th=[42730], 99.95th=[43254], 00:31:56.156 | 99.99th=[43254] 00:31:56.156 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1868.80, stdev=64.34, samples=20 00:31:56.156 iops : min= 448, max= 480, avg=467.20, stdev=16.08, samples=20 00:31:56.156 lat (msec) : 20=1.02%, 50=98.98% 00:31:56.156 cpu : usr=98.10%, sys=1.34%, ctx=59, majf=0, minf=43 00:31:56.156 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:56.156 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.156 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.156 filename1: (groupid=0, jobs=1): err= 0: pid=2571426: Tue Dec 10 04:19:48 2024 00:31:56.156 read: IOPS=466, BW=1865KiB/s (1910kB/s)(18.2MiB/10015msec) 00:31:56.156 slat (usec): min=11, max=108, avg=42.98, stdev=15.06 00:31:56.156 clat (usec): min=14461, max=58486, avg=33921.27, stdev=2040.72 00:31:56.156 lat (usec): min=14488, max=58515, avg=33964.25, stdev=2040.28 00:31:56.156 clat percentiles (usec): 00:31:56.156 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:31:56.156 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.156 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.156 | 99.00th=[41157], 99.50th=[41681], 99.90th=[58459], 99.95th=[58459], 00:31:56.156 | 99.99th=[58459] 00:31:56.156 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1859.37, stdev=78.31, samples=19 00:31:56.156 iops : min= 416, max= 480, avg=464.84, stdev=19.58, samples=19 00:31:56.156 lat (msec) : 20=0.30%, 50=99.36%, 100=0.34% 00:31:56.156 cpu : usr=97.64%, sys=1.63%, ctx=96, majf=0, minf=27 00:31:56.156 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:56.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.156 issued rwts: total=4670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.156 filename1: (groupid=0, jobs=1): err= 0: pid=2571427: Tue Dec 10 04:19:48 2024 00:31:56.156 read: IOPS=465, BW=1862KiB/s (1907kB/s)(18.2MiB/10002msec) 00:31:56.156 slat (usec): min=8, max=116, avg=42.85, stdev=22.97 00:31:56.156 clat (usec): min=18326, max=73104, avg=33989.18, stdev=2621.74 00:31:56.156 lat (usec): min=18340, max=73125, avg=34032.03, stdev=2620.51 00:31:56.156 clat percentiles (usec): 00:31:56.156 | 1.00th=[32375], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:31:56.156 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.156 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.156 | 99.00th=[36963], 99.50th=[44303], 99.90th=[72877], 99.95th=[72877], 00:31:56.156 | 99.99th=[72877] 00:31:56.156 bw ( KiB/s): min= 1667, max= 1920, per=4.14%, avg=1852.79, stdev=77.91, samples=19 00:31:56.156 iops : min= 416, max= 480, avg=463.16, stdev=19.58, samples=19 00:31:56.156 lat (msec) : 20=0.34%, 50=99.31%, 100=0.34% 00:31:56.156 cpu : usr=97.26%, sys=1.82%, ctx=85, majf=0, minf=29 00:31:56.156 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:56.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.156 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.156 filename2: (groupid=0, jobs=1): err= 0: pid=2571428: Tue Dec 10 04:19:48 2024 00:31:56.156 read: IOPS=468, BW=1873KiB/s (1918kB/s)(18.3MiB/10013msec) 00:31:56.156 slat (usec): min=8, max=113, avg=30.28, stdev=14.29 00:31:56.156 clat (usec): min=16877, max=42117, avg=33929.89, stdev=1615.41 
00:31:56.156 lat (usec): min=16894, max=42147, avg=33960.17, stdev=1614.21 00:31:56.156 clat percentiles (usec): 00:31:56.156 | 1.00th=[25297], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:31:56.156 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.156 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:31:56.156 | 99.00th=[35390], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:31:56.156 | 99.99th=[42206] 00:31:56.156 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1868.80, stdev=64.34, samples=20 00:31:56.156 iops : min= 448, max= 480, avg=467.20, stdev=16.08, samples=20 00:31:56.156 lat (msec) : 20=0.30%, 50=99.70% 00:31:56.156 cpu : usr=97.53%, sys=1.63%, ctx=128, majf=0, minf=41 00:31:56.156 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:56.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.156 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.156 filename2: (groupid=0, jobs=1): err= 0: pid=2571429: Tue Dec 10 04:19:48 2024 00:31:56.156 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10007msec) 00:31:56.156 slat (nsec): min=14138, max=83008, avg=38353.49, stdev=10941.91 00:31:56.156 clat (usec): min=16310, max=43277, avg=33817.08, stdev=1753.79 00:31:56.156 lat (usec): min=16343, max=43301, avg=33855.44, stdev=1753.35 00:31:56.156 clat percentiles (usec): 00:31:56.156 | 1.00th=[26608], 5.00th=[33424], 10.00th=[33424], 20.00th=[33424], 00:31:56.156 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.156 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.156 | 99.00th=[35914], 99.50th=[40109], 99.90th=[43254], 99.95th=[43254], 00:31:56.156 | 99.99th=[43254] 00:31:56.156 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1868.80, stdev=64.34, samples=20 00:31:56.156 iops : min= 448, max= 480, avg=467.20, stdev=16.08, samples=20 00:31:56.156 lat (msec) : 20=0.98%, 50=99.02% 00:31:56.156 cpu : usr=97.50%, sys=1.60%, ctx=127, majf=0, minf=30 00:31:56.156 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:56.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.156 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.156 filename2: (groupid=0, jobs=1): err= 0: pid=2571430: Tue Dec 10 04:19:48 2024 00:31:56.156 read: IOPS=466, BW=1866KiB/s (1911kB/s)(18.2MiB/10014msec) 00:31:56.156 slat (usec): min=8, max=104, avg=43.08, stdev=17.53 00:31:56.156 clat (usec): min=14847, max=72449, avg=33919.89, stdev=2151.90 00:31:56.156 lat (usec): min=14857, max=72473, avg=33962.98, stdev=2151.40 00:31:56.156 clat percentiles (usec): 00:31:56.156 | 1.00th=[32637], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:31:56.156 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.156 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.156 | 99.00th=[41157], 99.50th=[41681], 99.90th=[57410], 99.95th=[57410], 00:31:56.156 | 99.99th=[72877] 00:31:56.156 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1859.53, stdev=77.89, samples=19 00:31:56.156 iops : min= 416, max= 
480, avg=464.84, stdev=19.58, samples=19 00:31:56.156 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:31:56.157 cpu : usr=97.22%, sys=1.75%, ctx=99, majf=0, minf=31 00:31:56.157 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:56.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.157 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.157 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.157 filename2: (groupid=0, jobs=1): err= 0: pid=2571431: Tue Dec 10 04:19:48 2024 00:31:56.157 read: IOPS=466, BW=1866KiB/s (1911kB/s)(18.2MiB/10014msec) 00:31:56.157 slat (nsec): min=11983, max=93237, avg=36573.77, stdev=9059.14 00:31:56.157 clat (usec): min=14382, max=57894, avg=33973.91, stdev=2061.81 00:31:56.157 lat (usec): min=14421, max=57916, avg=34010.49, stdev=2061.18 00:31:56.157 clat percentiles (usec): 00:31:56.157 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:31:56.157 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.157 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.157 | 99.00th=[41157], 99.50th=[41681], 99.90th=[57934], 99.95th=[57934], 00:31:56.157 | 99.99th=[57934] 00:31:56.157 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1859.37, stdev=78.31, samples=19 00:31:56.157 iops : min= 416, max= 480, avg=464.84, stdev=19.58, samples=19 00:31:56.157 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:31:56.157 cpu : usr=97.83%, sys=1.58%, ctx=57, majf=0, minf=23 00:31:56.157 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:56.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.157 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.157 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.157 filename2: (groupid=0, jobs=1): err= 0: pid=2571432: Tue Dec 10 04:19:48 2024 00:31:56.157 read: IOPS=466, BW=1866KiB/s (1911kB/s)(18.2MiB/10015msec) 00:31:56.157 slat (usec): min=9, max=121, avg=47.07, stdev=18.42 00:31:56.157 clat (usec): min=18147, max=51993, avg=33863.61, stdev=1715.94 00:31:56.157 lat (usec): min=18174, max=52018, avg=33910.68, stdev=1715.13 00:31:56.157 clat percentiles (usec): 00:31:56.157 | 1.00th=[32637], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:31:56.157 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.157 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.157 | 99.00th=[39584], 99.50th=[42730], 99.90th=[52167], 99.95th=[52167], 00:31:56.157 | 99.99th=[52167] 00:31:56.157 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1859.37, stdev=78.31, samples=19 00:31:56.157 iops : min= 416, max= 480, avg=464.84, stdev=19.58, samples=19 00:31:56.157 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:31:56.157 cpu : usr=98.27%, sys=1.30%, ctx=16, majf=0, minf=23 00:31:56.157 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:56.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.157 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.157 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.157 
filename2: (groupid=0, jobs=1): err= 0: pid=2571433: Tue Dec 10 04:19:48 2024 00:31:56.157 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.4MiB/10033msec) 00:31:56.157 slat (usec): min=8, max=109, avg=41.29, stdev=24.68 00:31:56.157 clat (usec): min=9058, max=41885, avg=33802.68, stdev=2030.87 00:31:56.157 lat (usec): min=9083, max=41918, avg=33843.97, stdev=2029.38 00:31:56.157 clat percentiles (usec): 00:31:56.157 | 1.00th=[32375], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:31:56.157 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.157 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.157 | 99.00th=[35914], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:56.157 | 99.99th=[41681] 00:31:56.157 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1874.00, stdev=61.96, samples=20 00:31:56.157 iops : min= 448, max= 480, avg=468.50, stdev=15.49, samples=20 00:31:56.157 lat (msec) : 10=0.19%, 20=0.55%, 50=99.26% 00:31:56.157 cpu : usr=98.28%, sys=1.30%, ctx=12, majf=0, minf=33 00:31:56.157 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:56.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.157 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.157 issued rwts: total=4701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.157 filename2: (groupid=0, jobs=1): err= 0: pid=2571434: Tue Dec 10 04:19:48 2024 00:31:56.157 read: IOPS=466, BW=1868KiB/s (1913kB/s)(18.2MiB/10005msec) 00:31:56.157 slat (usec): min=8, max=110, avg=40.41, stdev=16.34 00:31:56.157 clat (usec): min=18603, max=43331, avg=33929.33, stdev=1184.99 00:31:56.157 lat (usec): min=18667, max=43351, avg=33969.73, stdev=1182.95 00:31:56.157 clat percentiles (usec): 00:31:56.157 | 1.00th=[32637], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:31:56.157 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:31:56.157 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:31:56.157 | 99.00th=[35914], 99.50th=[40109], 99.90th=[43254], 99.95th=[43254], 00:31:56.157 | 99.99th=[43254] 00:31:56.157 bw ( KiB/s): min= 1788, max= 1920, per=4.17%, avg=1865.89, stdev=65.19, samples=19 00:31:56.157 iops : min= 447, max= 480, avg=466.47, stdev=16.30, samples=19 00:31:56.157 lat (msec) : 20=0.34%, 50=99.66% 00:31:56.157 cpu : usr=98.11%, sys=1.41%, ctx=35, majf=0, minf=32 00:31:56.157 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:56.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.157 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.157 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.157 filename2: (groupid=0, jobs=1): err= 0: pid=2571435: Tue Dec 10 04:19:48 2024 00:31:56.157 read: IOPS=466, BW=1866KiB/s (1911kB/s)(18.2MiB/10014msec) 00:31:56.157 slat (usec): min=6, max=312, avg=23.79, stdev=18.31 00:31:56.157 clat (usec): min=22063, max=51918, avg=34103.04, stdev=1336.82 00:31:56.157 lat (usec): min=22117, max=51937, avg=34126.83, stdev=1335.21 00:31:56.157 clat percentiles (usec): 00:31:56.157 | 1.00th=[32900], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:31:56.157 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:31:56.157 | 70.00th=[34341], 
80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:31:56.157 | 99.00th=[36963], 99.50th=[44827], 99.90th=[45351], 99.95th=[45351], 00:31:56.157 | 99.99th=[52167] 00:31:56.157 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1862.40, stdev=65.33, samples=20 00:31:56.157 iops : min= 448, max= 480, avg=465.60, stdev=16.33, samples=20 00:31:56.157 lat (msec) : 50=99.96%, 100=0.04% 00:31:56.157 cpu : usr=98.48%, sys=1.10%, ctx=12, majf=0, minf=29 00:31:56.157 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:56.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.157 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.157 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:56.157 00:31:56.157 Run status group 0 (all jobs): 00:31:56.157 READ: bw=43.7MiB/s (45.8MB/s), 1860KiB/s-1874KiB/s (1905kB/s-1919kB/s), io=438MiB (460MB), run=10002-10033msec 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for 
sub in "$@" 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:56.157 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:56.158 bdev_null0 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.158 04:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:56.158 [2024-12-10 04:19:49.009011] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:56.158 bdev_null1 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:56.158 { 00:31:56.158 "params": { 00:31:56.158 "name": "Nvme$subsystem", 00:31:56.158 "trtype": "$TEST_TRANSPORT", 00:31:56.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.158 "adrfam": "ipv4", 00:31:56.158 "trsvcid": 
"$NVMF_PORT", 00:31:56.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.158 "hdgst": ${hdgst:-false}, 00:31:56.158 "ddgst": ${ddgst:-false} 00:31:56.158 }, 00:31:56.158 "method": "bdev_nvme_attach_controller" 00:31:56.158 } 00:31:56.158 EOF 00:31:56.158 )") 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:56.158 { 00:31:56.158 "params": { 00:31:56.158 "name": "Nvme$subsystem", 00:31:56.158 "trtype": "$TEST_TRANSPORT", 00:31:56.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.158 "adrfam": "ipv4", 00:31:56.158 "trsvcid": "$NVMF_PORT", 00:31:56.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.158 "hdgst": ${hdgst:-false}, 00:31:56.158 "ddgst": ${ddgst:-false} 00:31:56.158 }, 00:31:56.158 "method": "bdev_nvme_attach_controller" 00:31:56.158 } 00:31:56.158 EOF 00:31:56.158 )") 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # 
(( file <= files )) 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:56.158 "params": { 00:31:56.158 "name": "Nvme0", 00:31:56.158 "trtype": "tcp", 00:31:56.158 "traddr": "10.0.0.2", 00:31:56.158 "adrfam": "ipv4", 00:31:56.158 "trsvcid": "4420", 00:31:56.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:56.158 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:56.158 "hdgst": false, 00:31:56.158 "ddgst": false 00:31:56.158 }, 00:31:56.158 "method": "bdev_nvme_attach_controller" 00:31:56.158 },{ 00:31:56.158 "params": { 00:31:56.158 "name": "Nvme1", 00:31:56.158 "trtype": "tcp", 00:31:56.158 "traddr": "10.0.0.2", 00:31:56.158 "adrfam": "ipv4", 00:31:56.158 "trsvcid": "4420", 00:31:56.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:56.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:56.158 "hdgst": false, 00:31:56.158 "ddgst": false 00:31:56.158 }, 00:31:56.158 "method": "bdev_nvme_attach_controller" 00:31:56.158 }' 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:56.158 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:56.159 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:56.159 ... 00:31:56.159 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:56.159 ... 
00:31:56.159 fio-3.35 00:31:56.159 Starting 4 threads 00:32:01.456 00:32:01.456 filename0: (groupid=0, jobs=1): err= 0: pid=2572813: Tue Dec 10 04:19:55 2024 00:32:01.456 read: IOPS=1833, BW=14.3MiB/s (15.0MB/s)(71.7MiB/5004msec) 00:32:01.456 slat (usec): min=4, max=103, avg=25.64, stdev=11.55 00:32:01.456 clat (usec): min=895, max=7758, avg=4271.10, stdev=376.62 00:32:01.456 lat (usec): min=910, max=7795, avg=4296.73, stdev=377.69 00:32:01.456 clat percentiles (usec): 00:32:01.456 | 1.00th=[ 2966], 5.00th=[ 3720], 10.00th=[ 3949], 20.00th=[ 4113], 00:32:01.456 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:32:01.456 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4555], 95.00th=[ 4621], 00:32:01.456 | 99.00th=[ 5211], 99.50th=[ 5669], 99.90th=[ 6980], 99.95th=[ 7308], 00:32:01.456 | 99.99th=[ 7767] 00:32:01.456 bw ( KiB/s): min=14336, max=14864, per=25.21%, avg=14667.20, stdev=165.77, samples=10 00:32:01.456 iops : min= 1792, max= 1858, avg=1833.40, stdev=20.72, samples=10 00:32:01.456 lat (usec) : 1000=0.01% 00:32:01.456 lat (msec) : 2=0.27%, 4=11.81%, 10=87.90% 00:32:01.456 cpu : usr=88.93%, sys=7.04%, ctx=373, majf=0, minf=0 00:32:01.456 IO depths : 1=0.9%, 2=19.1%, 4=54.9%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.457 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.457 issued rwts: total=9175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.457 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:01.457 filename0: (groupid=0, jobs=1): err= 0: pid=2572814: Tue Dec 10 04:19:55 2024 00:32:01.457 read: IOPS=1808, BW=14.1MiB/s (14.8MB/s)(70.7MiB/5002msec) 00:32:01.457 slat (usec): min=7, max=107, avg=25.47, stdev=14.27 00:32:01.457 clat (usec): min=699, max=8014, avg=4324.00, stdev=514.03 00:32:01.457 lat (usec): min=721, max=8028, avg=4349.48, stdev=513.90 00:32:01.457 clat percentiles (usec): 00:32:01.457 | 1.00th=[ 2474], 5.00th=[ 3818], 10.00th=[ 4015], 20.00th=[ 4146], 00:32:01.457 | 30.00th=[ 4228], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:32:01.457 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4555], 95.00th=[ 4883], 00:32:01.457 | 99.00th=[ 6521], 99.50th=[ 6915], 99.90th=[ 7439], 99.95th=[ 7635], 00:32:01.457 | 99.99th=[ 8029] 00:32:01.457 bw ( KiB/s): min=14284, max=14704, per=24.91%, avg=14488.44, stdev=148.65, samples=9 00:32:01.457 iops : min= 1785, max= 1838, avg=1811.00, stdev=18.67, samples=9 00:32:01.457 lat (usec) : 750=0.01%, 1000=0.03% 00:32:01.457 lat (msec) : 2=0.45%, 4=8.17%, 10=91.33% 00:32:01.457 cpu : usr=96.14%, sys=3.38%, ctx=6, majf=0, minf=9 00:32:01.457 IO depths : 1=1.2%, 2=20.6%, 4=53.4%, 8=24.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.457 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.457 issued rwts: total=9045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.457 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:01.457 filename1: (groupid=0, jobs=1): err= 0: pid=2572815: Tue Dec 10 04:19:55 2024 00:32:01.457 read: IOPS=1834, BW=14.3MiB/s (15.0MB/s)(71.7MiB/5001msec) 00:32:01.457 slat (nsec): min=7177, max=98628, avg=20370.57, stdev=13332.19 00:32:01.457 clat (usec): min=768, max=7911, avg=4287.60, stdev=390.01 00:32:01.457 lat (usec): min=790, max=7919, avg=4307.97, stdev=391.36 00:32:01.457 clat percentiles (usec): 00:32:01.457 | 1.00th=[ 2868], 5.00th=[ 3720], 
10.00th=[ 3949], 20.00th=[ 4146], 00:32:01.457 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4359], 00:32:01.457 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4555], 95.00th=[ 4686], 00:32:01.457 | 99.00th=[ 5342], 99.50th=[ 5735], 99.90th=[ 7635], 99.95th=[ 7767], 00:32:01.457 | 99.99th=[ 7898] 00:32:01.457 bw ( KiB/s): min=14464, max=15024, per=25.26%, avg=14696.89, stdev=233.94, samples=9 00:32:01.457 iops : min= 1808, max= 1878, avg=1837.11, stdev=29.24, samples=9 00:32:01.457 lat (usec) : 1000=0.01% 00:32:01.457 lat (msec) : 2=0.24%, 4=11.52%, 10=88.23% 00:32:01.457 cpu : usr=95.34%, sys=4.18%, ctx=9, majf=0, minf=0 00:32:01.457 IO depths : 1=1.4%, 2=15.0%, 4=57.9%, 8=25.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.457 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.457 issued rwts: total=9174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.457 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:01.457 filename1: (groupid=0, jobs=1): err= 0: pid=2572816: Tue Dec 10 04:19:55 2024 00:32:01.457 read: IOPS=1796, BW=14.0MiB/s (14.7MB/s)(70.2MiB/5003msec) 00:32:01.457 slat (usec): min=7, max=107, avg=26.37, stdev=14.10 00:32:01.457 clat (usec): min=921, max=7953, avg=4348.53, stdev=557.82 00:32:01.457 lat (usec): min=934, max=8011, avg=4374.91, stdev=557.52 00:32:01.457 clat percentiles (usec): 00:32:01.457 | 1.00th=[ 2245], 5.00th=[ 3916], 10.00th=[ 4047], 20.00th=[ 4146], 00:32:01.457 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4359], 00:32:01.457 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 5145], 00:32:01.457 | 99.00th=[ 6849], 99.50th=[ 7242], 99.90th=[ 7570], 99.95th=[ 7832], 00:32:01.457 | 99.99th=[ 7963] 00:32:01.457 bw ( KiB/s): min=14048, max=14736, per=24.70%, avg=14369.78, stdev=201.66, samples=9 00:32:01.457 iops : min= 1756, max= 1842, avg=1796.22, stdev=25.21, samples=9 00:32:01.457 lat (usec) : 1000=0.04% 00:32:01.457 lat (msec) : 2=0.59%, 4=6.37%, 10=92.99% 00:32:01.457 cpu : usr=95.88%, sys=3.62%, ctx=8, majf=0, minf=9 00:32:01.457 IO depths : 1=0.2%, 2=20.5%, 4=53.3%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.457 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.457 issued rwts: total=8990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.457 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:01.457 00:32:01.457 Run status group 0 (all jobs): 00:32:01.457 READ: bw=56.8MiB/s (59.6MB/s), 14.0MiB/s-14.3MiB/s (14.7MB/s-15.0MB/s), io=284MiB (298MB), run=5001-5004msec 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:01.457 04:19:55 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.457 00:32:01.457 real 0m24.241s 00:32:01.457 user 4m32.596s 00:32:01.457 sys 0m6.492s 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:01.457 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:01.457 ************************************ 00:32:01.457 END TEST fio_dif_rand_params 00:32:01.457 ************************************ 00:32:01.457 04:19:55 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:01.457 04:19:55 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:01.457 04:19:55 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:01.457 04:19:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:01.457 ************************************ 00:32:01.457 START TEST fio_dif_digest 00:32:01.457 ************************************ 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:01.457 04:19:55 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:01.457 bdev_null0 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:01.457 [2024-12-10 04:19:55.529364] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:01.457 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:01.458 { 00:32:01.458 "params": { 00:32:01.458 "name": "Nvme$subsystem", 00:32:01.458 "trtype": "$TEST_TRANSPORT", 00:32:01.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:01.458 "adrfam": "ipv4", 00:32:01.458 "trsvcid": "$NVMF_PORT", 00:32:01.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:01.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:01.458 "hdgst": ${hdgst:-false}, 00:32:01.458 "ddgst": ${ddgst:-false} 00:32:01.458 }, 00:32:01.458 "method": "bdev_nvme_attach_controller" 00:32:01.458 } 00:32:01.458 EOF 00:32:01.458 )") 00:32:01.458 
04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
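The heredoc and jq steps traced here are how the helper builds one "bdev_nvme_attach_controller" stanza per subsystem and joins them into the JSON that fio consumes. Below is a simplified stand-in for that helper, not the library function itself: the outer "subsystems"/"config" wrapper is assumed from the printed output, the stanza is built with printf instead of the heredoc, and any extra bdev options the real helper injects are omitted.

  gen_target_json() {
      # one attach-controller stanza per subsystem id; hdgst/ddgst default to false
      local sub config=()
      for sub in "$@"; do
          config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":%s,"ddgst":%s},"method":"bdev_nvme_attach_controller"}' \
              "$sub" "$sub" "$sub" "${hdgst:-false}" "${ddgst:-false}")")
      done
      local IFS=,
      # join the stanzas with commas (IFS) and let jq validate/pretty-print the result
      printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
  }

  gen_target_json 0 1 > bdev.json    # matches the two-subsystem rand_params run above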
00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:01.458 "params": { 00:32:01.458 "name": "Nvme0", 00:32:01.458 "trtype": "tcp", 00:32:01.458 "traddr": "10.0.0.2", 00:32:01.458 "adrfam": "ipv4", 00:32:01.458 "trsvcid": "4420", 00:32:01.458 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:01.458 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:01.458 "hdgst": true, 00:32:01.458 "ddgst": true 00:32:01.458 }, 00:32:01.458 "method": "bdev_nvme_attach_controller" 00:32:01.458 }' 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:01.458 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:01.458 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:01.458 ... 
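The only difference from the rand_params config earlier in this log is that this run resolves "hdgst" and "ddgst" to true, which asks the initiator to negotiate NVMe/TCP header and data digests with the target. With the stand-in generator sketched above, that is just a matter of the two shell variables the stanza expands:

  hdgst=true ddgst=true                 # picked up via ${hdgst:-false}/${ddgst:-false}
  gen_target_json 0 > bdev_digest.json
  jq '.subsystems[0].config[0].params | {hdgst, ddgst}' bdev_digest.json
  # expected: { "hdgst": true, "ddgst": true }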
00:32:01.458 fio-3.35 00:32:01.458 Starting 3 threads 00:32:13.676 00:32:13.676 filename0: (groupid=0, jobs=1): err= 0: pid=2573571: Tue Dec 10 04:20:06 2024 00:32:13.676 read: IOPS=204, BW=25.5MiB/s (26.8MB/s)(257MiB/10046msec) 00:32:13.676 slat (nsec): min=4214, max=38137, avg=15477.66, stdev=4603.75 00:32:13.676 clat (usec): min=11202, max=46291, avg=14626.61, stdev=1211.44 00:32:13.676 lat (usec): min=11216, max=46306, avg=14642.09, stdev=1211.26 00:32:13.676 clat percentiles (usec): 00:32:13.676 | 1.00th=[12256], 5.00th=[12911], 10.00th=[13304], 20.00th=[13829], 00:32:13.676 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:32:13.676 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15795], 95.00th=[16188], 00:32:13.676 | 99.00th=[16909], 99.50th=[17433], 99.90th=[21103], 99.95th=[21103], 00:32:13.676 | 99.99th=[46400] 00:32:13.676 bw ( KiB/s): min=25344, max=27648, per=34.79%, avg=26227.20, stdev=566.22, samples=20 00:32:13.676 iops : min= 198, max= 216, avg=204.90, stdev= 4.42, samples=20 00:32:13.676 lat (msec) : 20=99.81%, 50=0.19% 00:32:13.676 cpu : usr=91.64%, sys=6.81%, ctx=264, majf=0, minf=138 00:32:13.676 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.676 issued rwts: total=2052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.676 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:13.676 filename0: (groupid=0, jobs=1): err= 0: pid=2573572: Tue Dec 10 04:20:06 2024 00:32:13.676 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(243MiB/10046msec) 00:32:13.676 slat (nsec): min=4597, max=41595, avg=15597.91, stdev=3290.92 00:32:13.676 clat (usec): min=12136, max=48586, avg=15453.39, stdev=1447.32 00:32:13.676 lat (usec): min=12151, max=48604, avg=15468.98, stdev=1447.17 00:32:13.676 clat percentiles (usec): 00:32:13.676 | 1.00th=[13042], 5.00th=[13829], 10.00th=[14222], 20.00th=[14615], 00:32:13.676 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:32:13.676 | 70.00th=[15926], 80.00th=[16188], 90.00th=[16712], 95.00th=[17171], 00:32:13.676 | 99.00th=[17957], 99.50th=[18482], 99.90th=[46924], 99.95th=[48497], 00:32:13.676 | 99.99th=[48497] 00:32:13.676 bw ( KiB/s): min=23552, max=26112, per=32.99%, avg=24870.40, stdev=546.38, samples=20 00:32:13.676 iops : min= 184, max= 204, avg=194.30, stdev= 4.27, samples=20 00:32:13.676 lat (msec) : 20=99.74%, 50=0.26% 00:32:13.676 cpu : usr=91.83%, sys=6.51%, ctx=160, majf=0, minf=115 00:32:13.676 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.676 issued rwts: total=1945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.676 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:13.676 filename0: (groupid=0, jobs=1): err= 0: pid=2573573: Tue Dec 10 04:20:06 2024 00:32:13.676 read: IOPS=191, BW=23.9MiB/s (25.1MB/s)(240MiB/10045msec) 00:32:13.676 slat (nsec): min=4973, max=46058, avg=15588.40, stdev=3368.35 00:32:13.676 clat (usec): min=12329, max=54322, avg=15653.50, stdev=1556.52 00:32:13.676 lat (usec): min=12344, max=54336, avg=15669.09, stdev=1556.41 00:32:13.676 clat percentiles (usec): 00:32:13.676 | 1.00th=[13304], 5.00th=[14091], 10.00th=[14484], 20.00th=[14877], 00:32:13.676 | 
30.00th=[15139], 40.00th=[15401], 50.00th=[15533], 60.00th=[15795], 00:32:13.676 | 70.00th=[16057], 80.00th=[16319], 90.00th=[16909], 95.00th=[17171], 00:32:13.676 | 99.00th=[17957], 99.50th=[18482], 99.90th=[51643], 99.95th=[54264], 00:32:13.676 | 99.99th=[54264] 00:32:13.676 bw ( KiB/s): min=24064, max=25088, per=32.56%, avg=24550.40, stdev=298.31, samples=20 00:32:13.676 iops : min= 188, max= 196, avg=191.80, stdev= 2.33, samples=20 00:32:13.676 lat (msec) : 20=99.74%, 50=0.16%, 100=0.10% 00:32:13.676 cpu : usr=92.83%, sys=6.21%, ctx=71, majf=0, minf=126 00:32:13.676 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.676 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.676 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:13.676 00:32:13.676 Run status group 0 (all jobs): 00:32:13.676 READ: bw=73.6MiB/s (77.2MB/s), 23.9MiB/s-25.5MiB/s (25.1MB/s-26.8MB/s), io=740MiB (776MB), run=10045-10046msec 00:32:13.677 04:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:13.677 04:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:13.677 04:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:13.677 04:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:13.677 04:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:13.677 04:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:13.677 04:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.677 04:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:13.677 04:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.677 04:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:13.677 04:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.677 04:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:13.677 04:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.677 00:32:13.677 real 0m11.076s 00:32:13.677 user 0m28.671s 00:32:13.677 sys 0m2.231s 00:32:13.677 04:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:13.677 04:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:13.677 ************************************ 00:32:13.677 END TEST fio_dif_digest 00:32:13.677 ************************************ 00:32:13.677 04:20:06 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:13.677 04:20:06 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:13.677 04:20:06 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:13.677 04:20:06 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:32:13.677 04:20:06 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:13.677 04:20:06 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:32:13.677 04:20:06 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:13.677 04:20:06 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:13.677 rmmod nvme_tcp 00:32:13.677 rmmod nvme_fabrics 00:32:13.677 rmmod nvme_keyring 00:32:13.677 04:20:06 nvmf_dif -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:32:13.677 04:20:06 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:32:13.677 04:20:06 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:32:13.677 04:20:06 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2567519 ']' 00:32:13.677 04:20:06 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2567519 00:32:13.677 04:20:06 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2567519 ']' 00:32:13.677 04:20:06 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2567519 00:32:13.677 04:20:06 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:32:13.677 04:20:06 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:13.677 04:20:06 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2567519 00:32:13.677 04:20:06 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:13.677 04:20:06 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:13.677 04:20:06 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2567519' 00:32:13.677 killing process with pid 2567519 00:32:13.677 04:20:06 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2567519 00:32:13.677 04:20:06 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2567519 00:32:13.677 04:20:06 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:13.677 04:20:06 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:13.677 Waiting for block devices as requested 00:32:13.677 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:13.677 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:13.936 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:13.936 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:13.936 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:14.195 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:14.195 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:14.195 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:14.453 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:14.453 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:14.453 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:14.453 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:14.713 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:14.713 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:14.713 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:14.713 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:14.971 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:14.971 04:20:09 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:14.971 04:20:09 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:14.971 04:20:09 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:32:14.971 04:20:09 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:32:14.971 04:20:09 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:14.971 04:20:09 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:32:14.971 04:20:09 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:14.971 04:20:09 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:14.971 04:20:09 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.971 04:20:09 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:14.971 04:20:09 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.503 04:20:11 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:17.503 00:32:17.503 real 1m6.990s 
00:32:17.503 user 6m29.000s 00:32:17.503 sys 0m18.044s 00:32:17.503 04:20:11 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:17.503 04:20:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:17.503 ************************************ 00:32:17.503 END TEST nvmf_dif 00:32:17.503 ************************************ 00:32:17.503 04:20:11 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:17.503 04:20:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:17.503 04:20:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:17.503 04:20:11 -- common/autotest_common.sh@10 -- # set +x 00:32:17.503 ************************************ 00:32:17.503 START TEST nvmf_abort_qd_sizes 00:32:17.503 ************************************ 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:17.503 * Looking for test storage... 00:32:17.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:17.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.503 --rc genhtml_branch_coverage=1 00:32:17.503 --rc genhtml_function_coverage=1 00:32:17.503 --rc genhtml_legend=1 00:32:17.503 --rc geninfo_all_blocks=1 00:32:17.503 --rc geninfo_unexecuted_blocks=1 00:32:17.503 00:32:17.503 ' 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:17.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.503 --rc genhtml_branch_coverage=1 00:32:17.503 --rc genhtml_function_coverage=1 00:32:17.503 --rc genhtml_legend=1 00:32:17.503 --rc geninfo_all_blocks=1 00:32:17.503 --rc geninfo_unexecuted_blocks=1 00:32:17.503 00:32:17.503 ' 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:17.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.503 --rc genhtml_branch_coverage=1 00:32:17.503 --rc genhtml_function_coverage=1 00:32:17.503 --rc genhtml_legend=1 00:32:17.503 --rc geninfo_all_blocks=1 00:32:17.503 --rc geninfo_unexecuted_blocks=1 00:32:17.503 00:32:17.503 ' 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:17.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.503 --rc genhtml_branch_coverage=1 00:32:17.503 --rc genhtml_function_coverage=1 00:32:17.503 --rc genhtml_legend=1 00:32:17.503 --rc geninfo_all_blocks=1 00:32:17.503 --rc geninfo_unexecuted_blocks=1 00:32:17.503 00:32:17.503 ' 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:17.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.503 04:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:17.504 04:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.504 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:17.504 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:17.504 04:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:32:17.504 04:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:19.406 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:19.406 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:19.406 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.406 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:19.406 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:19.407 04:20:13 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:19.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:19.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:32:19.407 00:32:19.407 --- 10.0.0.2 ping statistics --- 00:32:19.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:19.407 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:19.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:19.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:32:19.407 00:32:19.407 --- 10.0.0.1 ping statistics --- 00:32:19.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:19.407 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:19.407 04:20:13 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:20.783 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:20.783 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:20.783 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:20.783 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:20.783 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:20.783 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:20.783 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:20.783 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:20.783 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:20.783 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:20.783 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:20.783 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:20.783 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:20.783 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:20.783 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:20.783 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:21.715 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2578488 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2578488 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2578488 ']' 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:21.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:21.973 04:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:21.973 [2024-12-10 04:20:16.276389] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:32:21.973 [2024-12-10 04:20:16.276467] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:21.973 [2024-12-10 04:20:16.349276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:22.231 [2024-12-10 04:20:16.408772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:22.231 [2024-12-10 04:20:16.408849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:22.231 [2024-12-10 04:20:16.408866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:22.231 [2024-12-10 04:20:16.408877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:22.231 [2024-12-10 04:20:16.408886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:22.231 [2024-12-10 04:20:16.412565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.231 [2024-12-10 04:20:16.412630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:22.231 [2024-12-10 04:20:16.412697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:22.231 [2024-12-10 04:20:16.412700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:32:22.231 
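At this point the abort_qd_sizes prologue has launched nvmf_tgt inside the test's network namespace, waited for its RPC socket, and begun scanning PCI for an NVMe controller to hand to SPDK. A condensed sketch of that start-and-wait pattern, with the SPDK path assumed and a simple poll standing in for the test's waitforlisten helper:

  SPDK=/path/to/spdk    # assumed checkout/build location
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!
  # poll the default RPC socket (/var/tmp/spdk.sock) until the app answers
  until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done

The RPC call goes over a Unix socket, so it works from outside the namespace even though the target's TCP listener lives inside it.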
04:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:22.231 04:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:22.231 ************************************ 00:32:22.231 START TEST spdk_target_abort 00:32:22.231 ************************************ 00:32:22.231 04:20:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:32:22.231 04:20:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:22.231 04:20:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:32:22.231 04:20:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.231 04:20:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:25.508 spdk_targetn1 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:25.508 [2024-12-10 04:20:19.446302] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:25.508 [2024-12-10 04:20:19.498688] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:25.508 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:25.509 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:25.509 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:25.509 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:25.509 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:25.509 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:25.509 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:25.509 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:25.509 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:25.509 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:25.509 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:25.509 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:25.509 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:25.509 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:25.509 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:25.509 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:25.509 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:25.509 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:25.509 04:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:28.787 Initializing NVMe Controllers 00:32:28.787 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:28.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:28.787 Initialization complete. Launching workers. 00:32:28.787 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12909, failed: 0 00:32:28.787 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1200, failed to submit 11709 00:32:28.787 success 773, unsuccessful 427, failed 0 00:32:28.787 04:20:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:28.787 04:20:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:32.063 Initializing NVMe Controllers 00:32:32.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:32.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:32.063 Initialization complete. Launching workers. 00:32:32.063 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8486, failed: 0 00:32:32.064 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1232, failed to submit 7254 00:32:32.064 success 326, unsuccessful 906, failed 0 00:32:32.064 04:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:32.064 04:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:35.341 Initializing NVMe Controllers 00:32:35.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:35.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:35.341 Initialization complete. Launching workers. 
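For reference, the queue-depth sweep traced above boils down to a loop like the following minimal sketch; the binary path, flags, and target string are taken verbatim from this run, while the loop itself is an assumed simplification of the rabort() helper in abort_qd_sizes.sh rather than a verbatim copy:

  #!/usr/bin/env bash
  # Sweep the SPDK abort example over the same queue depths used in the trace above.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TARGET='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
      # -q queue depth, -w rw with -M 50 for a 50/50 read/write mix, -o 4096-byte I/O, -r target transport ID
      "$SPDK_DIR/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$TARGET"
  done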
00:32:35.341 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31263, failed: 0 00:32:35.341 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2686, failed to submit 28577 00:32:35.341 success 530, unsuccessful 2156, failed 0 00:32:35.341 04:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:35.341 04:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.341 04:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:35.341 04:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.341 04:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:35.341 04:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.341 04:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:36.274 04:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.274 04:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2578488 00:32:36.274 04:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2578488 ']' 00:32:36.274 04:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2578488 00:32:36.274 04:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:32:36.274 04:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:36.274 04:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2578488 00:32:36.533 04:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:36.533 04:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:36.533 04:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2578488' 00:32:36.533 killing process with pid 2578488 00:32:36.533 04:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2578488 00:32:36.533 04:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2578488 00:32:36.533 00:32:36.533 real 0m14.312s 00:32:36.533 user 0m54.212s 00:32:36.533 sys 0m2.646s 00:32:36.533 04:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:36.533 04:20:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:36.533 ************************************ 00:32:36.533 END TEST spdk_target_abort 00:32:36.533 ************************************ 00:32:36.791 04:20:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:36.791 04:20:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:36.791 04:20:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:36.791 04:20:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:36.791 ************************************ 00:32:36.791 START TEST kernel_target_abort 00:32:36.791 
************************************ 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:36.791 04:20:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:38.164 Waiting for block devices as requested 00:32:38.164 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:38.164 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:38.164 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:38.164 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:38.423 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:38.423 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:38.423 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:38.423 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:38.681 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:38.681 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:38.681 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:38.939 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:38.939 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:38.939 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:38.939 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:39.199 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:39.199 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:39.199 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:39.199 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:39.199 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:32:39.199 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:32:39.199 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:39.199 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:32:39.199 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:32:39.199 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:39.199 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:39.199 No valid GPT data, bailing 00:32:39.458 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:39.458 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:32:39.458 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:32:39.458 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:32:39.458 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:32:39.458 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:39.458 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:39.458 04:20:33 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:39.458 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:39.458 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:32:39.458 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:32:39.458 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:32:39.458 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:32:39.458 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:32:39.458 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:32:39.458 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:32:39.458 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:39.458 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:39.458 00:32:39.458 Discovery Log Number of Records 2, Generation counter 2 00:32:39.459 =====Discovery Log Entry 0====== 00:32:39.459 trtype: tcp 00:32:39.459 adrfam: ipv4 00:32:39.459 subtype: current discovery subsystem 00:32:39.459 treq: not specified, sq flow control disable supported 00:32:39.459 portid: 1 00:32:39.459 trsvcid: 4420 00:32:39.459 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:39.459 traddr: 10.0.0.1 00:32:39.459 eflags: none 00:32:39.459 sectype: none 00:32:39.459 =====Discovery Log Entry 1====== 00:32:39.459 trtype: tcp 00:32:39.459 adrfam: ipv4 00:32:39.459 subtype: nvme subsystem 00:32:39.459 treq: not specified, sq flow control disable supported 00:32:39.459 portid: 1 00:32:39.459 trsvcid: 4420 00:32:39.459 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:39.459 traddr: 10.0.0.1 00:32:39.459 eflags: none 00:32:39.459 sectype: none 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:39.459 04:20:33 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:39.459 04:20:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:42.738 Initializing NVMe Controllers 00:32:42.738 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:42.738 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:42.738 Initialization complete. Launching workers. 00:32:42.738 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56195, failed: 0 00:32:42.738 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56195, failed to submit 0 00:32:42.738 success 0, unsuccessful 56195, failed 0 00:32:42.738 04:20:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:42.738 04:20:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:46.016 Initializing NVMe Controllers 00:32:46.016 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:46.016 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:46.016 Initialization complete. Launching workers. 
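The configure_kernel_target steps traced above show only the echoed values, because xtrace hides the redirections; under the standard kernel nvmet configfs layout they amount to roughly the sketch below. The attribute file names (attr_serial, attr_allow_any_host, device_path, enable, addr_*) are the usual nvmet ones and are an assumption here, as is the explicit nvmet_tcp load (the trace itself only shows 'modprobe nvmet', though nvmet_tcp is unloaded during cleanup further down):

  #!/usr/bin/env bash
  # Hedged sketch of a kernel NVMe/TCP target backed by /dev/nvme0n1, mirroring the trace above.
  nqn=nqn.2016-06.io.spdk:testnqn
  subsys=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  modprobe nvmet
  modprobe nvmet_tcp                                   # assumed explicit load; see note above
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  echo "SPDK-$nqn"  > "$subsys/attr_serial"            # destination file assumed; the trace only shows the echoed value
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/$nqn"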
00:32:46.016 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 102452, failed: 0 00:32:46.016 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25814, failed to submit 76638 00:32:46.016 success 0, unsuccessful 25814, failed 0 00:32:46.016 04:20:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:46.016 04:20:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:49.296 Initializing NVMe Controllers 00:32:49.296 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:49.296 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:49.296 Initialization complete. Launching workers. 00:32:49.296 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 97156, failed: 0 00:32:49.296 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24294, failed to submit 72862 00:32:49.296 success 0, unsuccessful 24294, failed 0 00:32:49.296 04:20:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:49.296 04:20:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:49.296 04:20:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:32:49.296 04:20:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:49.296 04:20:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:49.296 04:20:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:49.296 04:20:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:49.296 04:20:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:49.296 04:20:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:32:49.296 04:20:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:50.229 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:50.229 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:50.229 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:50.229 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:50.229 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:50.229 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:50.229 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:50.229 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:50.229 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:50.229 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:50.229 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:50.229 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:50.229 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:50.229 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:50.229 0000:80:04.1 (8086 0e21): ioatdma 
-> vfio-pci 00:32:50.229 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:51.169 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:51.169 00:32:51.169 real 0m14.587s 00:32:51.169 user 0m6.765s 00:32:51.169 sys 0m3.346s 00:32:51.169 04:20:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:51.169 04:20:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:51.169 ************************************ 00:32:51.169 END TEST kernel_target_abort 00:32:51.169 ************************************ 00:32:51.463 04:20:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:51.463 04:20:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:51.463 04:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:51.463 04:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:32:51.463 04:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:51.463 04:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:32:51.463 04:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:51.463 04:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:51.463 rmmod nvme_tcp 00:32:51.463 rmmod nvme_fabrics 00:32:51.463 rmmod nvme_keyring 00:32:51.463 04:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:51.463 04:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:32:51.463 04:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:32:51.463 04:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2578488 ']' 00:32:51.463 04:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2578488 00:32:51.463 04:20:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2578488 ']' 00:32:51.463 04:20:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2578488 00:32:51.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2578488) - No such process 00:32:51.463 04:20:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2578488 is not found' 00:32:51.463 Process with pid 2578488 is not found 00:32:51.463 04:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:51.463 04:20:45 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:52.426 Waiting for block devices as requested 00:32:52.426 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:52.685 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:52.685 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:52.944 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:52.944 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:52.944 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:52.944 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:53.202 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:53.202 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:53.202 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:53.202 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:53.461 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:53.461 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:53.461 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:53.719 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:53.719 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:53.719 
0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:53.977 04:20:48 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:53.978 04:20:48 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:53.978 04:20:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:32:53.978 04:20:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:32:53.978 04:20:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:53.978 04:20:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:32:53.978 04:20:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:53.978 04:20:48 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:53.978 04:20:48 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.978 04:20:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:53.978 04:20:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:55.883 04:20:50 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:55.883 00:32:55.883 real 0m38.818s 00:32:55.883 user 1m3.209s 00:32:55.883 sys 0m9.738s 00:32:55.883 04:20:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:55.883 04:20:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:55.883 ************************************ 00:32:55.883 END TEST nvmf_abort_qd_sizes 00:32:55.883 ************************************ 00:32:55.883 04:20:50 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:55.883 04:20:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:55.883 04:20:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:55.883 04:20:50 -- common/autotest_common.sh@10 -- # set +x 00:32:55.883 ************************************ 00:32:55.883 START TEST keyring_file 00:32:55.883 ************************************ 00:32:55.883 04:20:50 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:56.142 * Looking for test storage... 
00:32:56.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:56.142 04:20:50 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:56.142 04:20:50 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:32:56.142 04:20:50 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:56.142 04:20:50 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@345 -- # : 1 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@353 -- # local d=1 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@355 -- # echo 1 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@353 -- # local d=2 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@355 -- # echo 2 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@368 -- # return 0 00:32:56.142 04:20:50 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:56.142 04:20:50 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:56.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.142 --rc genhtml_branch_coverage=1 00:32:56.142 --rc genhtml_function_coverage=1 00:32:56.142 --rc genhtml_legend=1 00:32:56.142 --rc geninfo_all_blocks=1 00:32:56.142 --rc geninfo_unexecuted_blocks=1 00:32:56.142 00:32:56.142 ' 00:32:56.142 04:20:50 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:56.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.142 --rc genhtml_branch_coverage=1 00:32:56.142 --rc genhtml_function_coverage=1 00:32:56.142 --rc genhtml_legend=1 00:32:56.142 --rc geninfo_all_blocks=1 
00:32:56.142 --rc geninfo_unexecuted_blocks=1 00:32:56.142 00:32:56.142 ' 00:32:56.142 04:20:50 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:56.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.142 --rc genhtml_branch_coverage=1 00:32:56.142 --rc genhtml_function_coverage=1 00:32:56.142 --rc genhtml_legend=1 00:32:56.142 --rc geninfo_all_blocks=1 00:32:56.142 --rc geninfo_unexecuted_blocks=1 00:32:56.142 00:32:56.142 ' 00:32:56.142 04:20:50 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:56.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.142 --rc genhtml_branch_coverage=1 00:32:56.142 --rc genhtml_function_coverage=1 00:32:56.142 --rc genhtml_legend=1 00:32:56.142 --rc geninfo_all_blocks=1 00:32:56.142 --rc geninfo_unexecuted_blocks=1 00:32:56.142 00:32:56.142 ' 00:32:56.142 04:20:50 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:56.142 04:20:50 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:56.142 04:20:50 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:56.142 04:20:50 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:56.142 04:20:50 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:56.142 04:20:50 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:56.142 04:20:50 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:56.142 04:20:50 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:56.142 04:20:50 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:56.142 04:20:50 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:56.142 04:20:50 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:56.142 04:20:50 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:56.142 04:20:50 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:56.142 04:20:50 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:56.142 04:20:50 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:56.142 04:20:50 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:56.142 04:20:50 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:56.142 04:20:50 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:56.142 04:20:50 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:56.142 04:20:50 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:56.142 04:20:50 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:56.142 04:20:50 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.142 04:20:50 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.143 04:20:50 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.143 04:20:50 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:56.143 04:20:50 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@51 -- # : 0 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:56.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:56.143 04:20:50 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:56.143 04:20:50 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:56.143 04:20:50 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:56.143 04:20:50 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:56.143 04:20:50 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:56.143 04:20:50 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
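The key handling that file.sh sets up from this point is small: prep_key writes a TLS PSK in NVMe interchange format into a mktemp file and chmods it to 0600, and the test then registers that file with the bdevperf instance over its RPC socket and attaches a controller that refers to the key by name. Condensed into a sketch (the tmp file name is simply what mktemp returned in this run, and the rpc.py invocations are the ones visible in the trace that follows):

  #!/usr/bin/env bash
  # Sketch of the key plumbing exercised below; an assumed condensation, not a verbatim copy of keyring/file.sh.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock

  key0path=/tmp/tmp.mTRIinYe33   # returned by mktemp in this run; prep_key fills it with the interchange-format PSK
  chmod 0600 "$key0path"         # the test keeps key files at 0600 (a later step flips one to 0660 for a negative check)

  # register the file-backed key with the bdevperf RPC socket and inspect it
  "$RPC" -s "$SOCK" keyring_file_add_key key0 "$key0path"
  "$RPC" -s "$SOCK" keyring_get_keys | jq '.[] | select(.name == "key0")'

  # attach over TCP, referencing the registered key by name
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

The later negative path repeats the same attach with --psk key1 and expects it to fail, which is what the NOT wrapper further down checks.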
00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mTRIinYe33 00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mTRIinYe33 00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mTRIinYe33 00:32:56.143 04:20:50 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.mTRIinYe33 00:32:56.143 04:20:50 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6bYtHCSXes 00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:56.143 04:20:50 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6bYtHCSXes 00:32:56.143 04:20:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6bYtHCSXes 00:32:56.143 04:20:50 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.6bYtHCSXes 00:32:56.143 04:20:50 keyring_file -- keyring/file.sh@30 -- # tgtpid=2584267 00:32:56.143 04:20:50 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:56.143 04:20:50 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2584267 00:32:56.143 04:20:50 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2584267 ']' 00:32:56.143 04:20:50 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.143 04:20:50 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:56.143 04:20:50 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.143 04:20:50 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:56.143 04:20:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:56.143 [2024-12-10 04:20:50.517296] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:32:56.143 [2024-12-10 04:20:50.517376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2584267 ] 00:32:56.402 [2024-12-10 04:20:50.586115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.402 [2024-12-10 04:20:50.646750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:56.661 04:20:50 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:56.661 [2024-12-10 04:20:50.907759] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:56.661 null0 00:32:56.661 [2024-12-10 04:20:50.939810] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:56.661 [2024-12-10 04:20:50.940208] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.661 04:20:50 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:56.661 [2024-12-10 04:20:50.963867] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:56.661 request: 00:32:56.661 { 00:32:56.661 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:56.661 "secure_channel": false, 00:32:56.661 "listen_address": { 00:32:56.661 "trtype": "tcp", 00:32:56.661 "traddr": "127.0.0.1", 00:32:56.661 "trsvcid": "4420" 00:32:56.661 }, 00:32:56.661 "method": "nvmf_subsystem_add_listener", 00:32:56.661 "req_id": 1 00:32:56.661 } 00:32:56.661 Got JSON-RPC error response 00:32:56.661 response: 00:32:56.661 { 00:32:56.661 
"code": -32602, 00:32:56.661 "message": "Invalid parameters" 00:32:56.661 } 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:56.661 04:20:50 keyring_file -- keyring/file.sh@47 -- # bperfpid=2584272 00:32:56.661 04:20:50 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:56.661 04:20:50 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2584272 /var/tmp/bperf.sock 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2584272 ']' 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:56.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:56.661 04:20:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:56.661 [2024-12-10 04:20:51.011080] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:32:56.661 [2024-12-10 04:20:51.011151] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2584272 ] 00:32:56.919 [2024-12-10 04:20:51.076288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.919 [2024-12-10 04:20:51.139878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:56.919 04:20:51 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:56.919 04:20:51 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:56.919 04:20:51 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mTRIinYe33 00:32:56.919 04:20:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mTRIinYe33 00:32:57.177 04:20:51 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.6bYtHCSXes 00:32:57.177 04:20:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6bYtHCSXes 00:32:57.435 04:20:51 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:32:57.435 04:20:51 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:57.435 04:20:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:57.435 04:20:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.435 04:20:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:32:57.693 04:20:52 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.mTRIinYe33 == \/\t\m\p\/\t\m\p\.\m\T\R\I\i\n\Y\e\3\3 ]] 00:32:57.693 04:20:52 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:32:57.693 04:20:52 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:32:57.693 04:20:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:57.693 04:20:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.693 04:20:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:58.259 04:20:52 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.6bYtHCSXes == \/\t\m\p\/\t\m\p\.\6\b\Y\t\H\C\S\X\e\s ]] 00:32:58.259 04:20:52 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:32:58.259 04:20:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:58.259 04:20:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:58.259 04:20:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:58.259 04:20:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:58.259 04:20:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:58.259 04:20:52 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:58.259 04:20:52 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:32:58.259 04:20:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:58.259 04:20:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:58.259 04:20:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:58.259 04:20:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:58.259 04:20:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:58.518 04:20:52 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:32:58.518 04:20:52 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:58.518 04:20:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:58.775 [2024-12-10 04:20:53.134847] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:59.035 nvme0n1 00:32:59.035 04:20:53 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:32:59.035 04:20:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:59.035 04:20:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:59.035 04:20:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:59.035 04:20:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:59.035 04:20:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:59.293 04:20:53 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:32:59.293 04:20:53 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:32:59.293 04:20:53 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:32:59.293 04:20:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:59.293 04:20:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:59.293 04:20:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:59.293 04:20:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:59.551 04:20:53 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:32:59.551 04:20:53 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:59.551 Running I/O for 1 seconds... 00:33:00.924 10553.00 IOPS, 41.22 MiB/s 00:33:00.924 Latency(us) 00:33:00.924 [2024-12-10T03:20:55.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.924 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:00.924 nvme0n1 : 1.01 10603.59 41.42 0.00 0.00 12037.52 4490.43 18252.99 00:33:00.924 [2024-12-10T03:20:55.313Z] =================================================================================================================== 00:33:00.924 [2024-12-10T03:20:55.313Z] Total : 10603.59 41.42 0.00 0.00 12037.52 4490.43 18252.99 00:33:00.924 { 00:33:00.924 "results": [ 00:33:00.924 { 00:33:00.924 "job": "nvme0n1", 00:33:00.924 "core_mask": "0x2", 00:33:00.924 "workload": "randrw", 00:33:00.924 "percentage": 50, 00:33:00.924 "status": "finished", 00:33:00.924 "queue_depth": 128, 00:33:00.924 "io_size": 4096, 00:33:00.924 "runtime": 1.0073, 00:33:00.924 "iops": 10603.593765511765, 00:33:00.924 "mibps": 41.42028814653033, 00:33:00.924 "io_failed": 0, 00:33:00.924 "io_timeout": 0, 00:33:00.924 "avg_latency_us": 12037.515184526348, 00:33:00.924 "min_latency_us": 4490.42962962963, 00:33:00.924 "max_latency_us": 18252.98962962963 00:33:00.924 } 00:33:00.924 ], 00:33:00.924 "core_count": 1 00:33:00.924 } 00:33:00.924 04:20:54 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:00.924 04:20:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:00.924 04:20:55 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:33:00.924 04:20:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:00.924 04:20:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:00.924 04:20:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:00.924 04:20:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:00.924 04:20:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:01.182 04:20:55 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:01.182 04:20:55 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:33:01.182 04:20:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:01.182 04:20:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:01.182 04:20:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:01.182 04:20:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:01.182 04:20:55 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:01.440 04:20:55 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:33:01.440 04:20:55 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:01.440 04:20:55 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:01.440 04:20:55 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:01.440 04:20:55 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:01.440 04:20:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:01.440 04:20:55 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:01.440 04:20:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:01.440 04:20:55 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:01.440 04:20:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:01.698 [2024-12-10 04:20:55.997742] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:01.698 [2024-12-10 04:20:55.998471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb4170 (107): Transport endpoint is not connected 00:33:01.698 [2024-12-10 04:20:55.999464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb4170 (9): Bad file descriptor 00:33:01.698 [2024-12-10 04:20:56.000464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:33:01.698 [2024-12-10 04:20:56.000483] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:01.698 [2024-12-10 04:20:56.000496] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:01.698 [2024-12-10 04:20:56.000510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
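A minimal recap of the RPC flow the keyring_file case above exercises, for readers skimming the trace; this is an illustrative sketch, not part of the test scripts. The rpc.py path, socket, command names and flags are taken verbatim from the trace; PSK0 and PSK1 stand in for the mktemp-generated key files (mode 0600) that the test actually uses. The JSON-RPC request and the Input/output error returned for the mismatched key1 attach follow immediately below in the log.
# Illustrative sketch only; assumes bdevperf is already listening on /var/tmp/bperf.sock
# and that $PSK0 / $PSK1 are interchange-format TLS PSK files with mode 0600.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 "$PSK0"
$rpc -s /var/tmp/bperf.sock keyring_file_add_key key1 "$PSK1"
# Attaching with the PSK the target was configured for (key0) succeeds and bumps its refcnt.
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk key0
$rpc -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0").refcnt'
# The same attach with --psk key1 is expected to fail; the resulting JSON-RPC error is shown next.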
00:33:01.698 request: 00:33:01.698 { 00:33:01.698 "name": "nvme0", 00:33:01.698 "trtype": "tcp", 00:33:01.698 "traddr": "127.0.0.1", 00:33:01.698 "adrfam": "ipv4", 00:33:01.698 "trsvcid": "4420", 00:33:01.698 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:01.698 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:01.698 "prchk_reftag": false, 00:33:01.698 "prchk_guard": false, 00:33:01.698 "hdgst": false, 00:33:01.698 "ddgst": false, 00:33:01.698 "psk": "key1", 00:33:01.698 "allow_unrecognized_csi": false, 00:33:01.698 "method": "bdev_nvme_attach_controller", 00:33:01.698 "req_id": 1 00:33:01.698 } 00:33:01.698 Got JSON-RPC error response 00:33:01.698 response: 00:33:01.698 { 00:33:01.698 "code": -5, 00:33:01.698 "message": "Input/output error" 00:33:01.698 } 00:33:01.698 04:20:56 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:01.698 04:20:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:01.698 04:20:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:01.698 04:20:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:01.698 04:20:56 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:33:01.698 04:20:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:01.699 04:20:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:01.699 04:20:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:01.699 04:20:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:01.699 04:20:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:01.956 04:20:56 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:01.956 04:20:56 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:33:01.956 04:20:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:01.956 04:20:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:01.956 04:20:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:01.956 04:20:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:01.956 04:20:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:02.214 04:20:56 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:33:02.214 04:20:56 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:33:02.214 04:20:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:02.472 04:20:56 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:33:02.472 04:20:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:02.729 04:20:57 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:33:02.729 04:20:57 keyring_file -- keyring/file.sh@78 -- # jq length 00:33:02.729 04:20:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:03.295 04:20:57 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:33:03.295 04:20:57 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.mTRIinYe33 00:33:03.295 04:20:57 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.mTRIinYe33 00:33:03.295 04:20:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:03.295 04:20:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.mTRIinYe33 00:33:03.295 04:20:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:03.295 04:20:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.295 04:20:57 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:03.295 04:20:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.296 04:20:57 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mTRIinYe33 00:33:03.296 04:20:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mTRIinYe33 00:33:03.296 [2024-12-10 04:20:57.639621] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.mTRIinYe33': 0100660 00:33:03.296 [2024-12-10 04:20:57.639655] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:03.296 request: 00:33:03.296 { 00:33:03.296 "name": "key0", 00:33:03.296 "path": "/tmp/tmp.mTRIinYe33", 00:33:03.296 "method": "keyring_file_add_key", 00:33:03.296 "req_id": 1 00:33:03.296 } 00:33:03.296 Got JSON-RPC error response 00:33:03.296 response: 00:33:03.296 { 00:33:03.296 "code": -1, 00:33:03.296 "message": "Operation not permitted" 00:33:03.296 } 00:33:03.296 04:20:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:03.296 04:20:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:03.296 04:20:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:03.296 04:20:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:03.296 04:20:57 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.mTRIinYe33 00:33:03.296 04:20:57 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mTRIinYe33 00:33:03.296 04:20:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mTRIinYe33 00:33:03.554 04:20:57 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.mTRIinYe33 00:33:03.554 04:20:57 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:33:03.554 04:20:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:03.554 04:20:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:03.812 04:20:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:03.812 04:20:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:03.812 04:20:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:04.069 04:20:58 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:33:04.070 04:20:58 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:04.070 04:20:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:04.070 04:20:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:04.070 04:20:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:04.070 04:20:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:04.070 04:20:58 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:04.070 04:20:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:04.070 04:20:58 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:04.070 04:20:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:04.327 [2024-12-10 04:20:58.469858] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.mTRIinYe33': No such file or directory 00:33:04.327 [2024-12-10 04:20:58.469892] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:04.327 [2024-12-10 04:20:58.469920] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:04.328 [2024-12-10 04:20:58.469933] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:33:04.328 [2024-12-10 04:20:58.469945] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:04.328 [2024-12-10 04:20:58.469955] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:04.328 request: 00:33:04.328 { 00:33:04.328 "name": "nvme0", 00:33:04.328 "trtype": "tcp", 00:33:04.328 "traddr": "127.0.0.1", 00:33:04.328 "adrfam": "ipv4", 00:33:04.328 "trsvcid": "4420", 00:33:04.328 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:04.328 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:04.328 "prchk_reftag": false, 00:33:04.328 "prchk_guard": false, 00:33:04.328 "hdgst": false, 00:33:04.328 "ddgst": false, 00:33:04.328 "psk": "key0", 00:33:04.328 "allow_unrecognized_csi": false, 00:33:04.328 "method": "bdev_nvme_attach_controller", 00:33:04.328 "req_id": 1 00:33:04.328 } 00:33:04.328 Got JSON-RPC error response 00:33:04.328 response: 00:33:04.328 { 00:33:04.328 "code": -19, 00:33:04.328 "message": "No such device" 00:33:04.328 } 00:33:04.328 04:20:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:04.328 04:20:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:04.328 04:20:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:04.328 04:20:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:04.328 04:20:58 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:33:04.328 04:20:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:04.585 04:20:58 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:04.585 04:20:58 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:33:04.585 04:20:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:04.585 04:20:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:04.586 04:20:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:04.586 04:20:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:04.586 04:20:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.7lIF2FkYyG 00:33:04.586 04:20:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:04.586 04:20:58 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:04.586 04:20:58 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:04.586 04:20:58 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:04.586 04:20:58 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:33:04.586 04:20:58 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:04.586 04:20:58 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:04.586 04:20:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7lIF2FkYyG 00:33:04.586 04:20:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.7lIF2FkYyG 00:33:04.586 04:20:58 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.7lIF2FkYyG 00:33:04.586 04:20:58 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7lIF2FkYyG 00:33:04.586 04:20:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7lIF2FkYyG 00:33:04.844 04:20:59 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:04.844 04:20:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:05.102 nvme0n1 00:33:05.102 04:20:59 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:33:05.102 04:20:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:05.102 04:20:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:05.102 04:20:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:05.102 04:20:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:05.102 04:20:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:05.359 04:20:59 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:33:05.359 04:20:59 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:33:05.359 04:20:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:05.617 04:20:59 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:33:05.617 04:20:59 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:33:05.617 04:20:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:05.617 04:20:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:05.617 04:20:59 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:05.875 04:21:00 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:33:05.875 04:21:00 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:33:05.875 04:21:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:05.875 04:21:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:05.875 04:21:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:05.875 04:21:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:05.875 04:21:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:06.440 04:21:00 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:33:06.440 04:21:00 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:06.440 04:21:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:06.698 04:21:00 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:33:06.698 04:21:00 keyring_file -- keyring/file.sh@105 -- # jq length 00:33:06.698 04:21:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:06.956 04:21:01 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:33:06.956 04:21:01 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7lIF2FkYyG 00:33:06.956 04:21:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7lIF2FkYyG 00:33:07.213 04:21:01 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.6bYtHCSXes 00:33:07.213 04:21:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6bYtHCSXes 00:33:07.471 04:21:01 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:07.471 04:21:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:07.729 nvme0n1 00:33:07.729 04:21:02 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:33:07.729 04:21:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:08.295 04:21:02 keyring_file -- keyring/file.sh@113 -- # config='{ 00:33:08.295 "subsystems": [ 00:33:08.295 { 00:33:08.295 "subsystem": "keyring", 00:33:08.295 "config": [ 00:33:08.295 { 00:33:08.295 "method": "keyring_file_add_key", 00:33:08.295 "params": { 00:33:08.295 "name": "key0", 00:33:08.295 "path": "/tmp/tmp.7lIF2FkYyG" 00:33:08.295 } 00:33:08.295 }, 00:33:08.295 { 00:33:08.295 "method": "keyring_file_add_key", 00:33:08.295 "params": { 00:33:08.295 "name": "key1", 00:33:08.295 "path": "/tmp/tmp.6bYtHCSXes" 00:33:08.295 } 00:33:08.295 } 00:33:08.295 ] 00:33:08.295 
}, 00:33:08.295 { 00:33:08.295 "subsystem": "iobuf", 00:33:08.295 "config": [ 00:33:08.295 { 00:33:08.295 "method": "iobuf_set_options", 00:33:08.295 "params": { 00:33:08.295 "small_pool_count": 8192, 00:33:08.295 "large_pool_count": 1024, 00:33:08.295 "small_bufsize": 8192, 00:33:08.295 "large_bufsize": 135168, 00:33:08.295 "enable_numa": false 00:33:08.295 } 00:33:08.295 } 00:33:08.295 ] 00:33:08.295 }, 00:33:08.295 { 00:33:08.295 "subsystem": "sock", 00:33:08.295 "config": [ 00:33:08.295 { 00:33:08.295 "method": "sock_set_default_impl", 00:33:08.295 "params": { 00:33:08.295 "impl_name": "posix" 00:33:08.295 } 00:33:08.295 }, 00:33:08.295 { 00:33:08.295 "method": "sock_impl_set_options", 00:33:08.295 "params": { 00:33:08.295 "impl_name": "ssl", 00:33:08.295 "recv_buf_size": 4096, 00:33:08.295 "send_buf_size": 4096, 00:33:08.295 "enable_recv_pipe": true, 00:33:08.295 "enable_quickack": false, 00:33:08.295 "enable_placement_id": 0, 00:33:08.295 "enable_zerocopy_send_server": true, 00:33:08.295 "enable_zerocopy_send_client": false, 00:33:08.295 "zerocopy_threshold": 0, 00:33:08.295 "tls_version": 0, 00:33:08.295 "enable_ktls": false 00:33:08.295 } 00:33:08.295 }, 00:33:08.295 { 00:33:08.295 "method": "sock_impl_set_options", 00:33:08.295 "params": { 00:33:08.295 "impl_name": "posix", 00:33:08.295 "recv_buf_size": 2097152, 00:33:08.295 "send_buf_size": 2097152, 00:33:08.295 "enable_recv_pipe": true, 00:33:08.295 "enable_quickack": false, 00:33:08.295 "enable_placement_id": 0, 00:33:08.295 "enable_zerocopy_send_server": true, 00:33:08.295 "enable_zerocopy_send_client": false, 00:33:08.295 "zerocopy_threshold": 0, 00:33:08.295 "tls_version": 0, 00:33:08.295 "enable_ktls": false 00:33:08.295 } 00:33:08.295 } 00:33:08.295 ] 00:33:08.295 }, 00:33:08.295 { 00:33:08.295 "subsystem": "vmd", 00:33:08.295 "config": [] 00:33:08.295 }, 00:33:08.295 { 00:33:08.295 "subsystem": "accel", 00:33:08.295 "config": [ 00:33:08.295 { 00:33:08.295 "method": "accel_set_options", 00:33:08.295 "params": { 00:33:08.295 "small_cache_size": 128, 00:33:08.295 "large_cache_size": 16, 00:33:08.295 "task_count": 2048, 00:33:08.295 "sequence_count": 2048, 00:33:08.295 "buf_count": 2048 00:33:08.295 } 00:33:08.295 } 00:33:08.295 ] 00:33:08.295 }, 00:33:08.295 { 00:33:08.295 "subsystem": "bdev", 00:33:08.295 "config": [ 00:33:08.295 { 00:33:08.295 "method": "bdev_set_options", 00:33:08.295 "params": { 00:33:08.295 "bdev_io_pool_size": 65535, 00:33:08.295 "bdev_io_cache_size": 256, 00:33:08.295 "bdev_auto_examine": true, 00:33:08.295 "iobuf_small_cache_size": 128, 00:33:08.295 "iobuf_large_cache_size": 16 00:33:08.295 } 00:33:08.295 }, 00:33:08.295 { 00:33:08.295 "method": "bdev_raid_set_options", 00:33:08.295 "params": { 00:33:08.295 "process_window_size_kb": 1024, 00:33:08.295 "process_max_bandwidth_mb_sec": 0 00:33:08.295 } 00:33:08.295 }, 00:33:08.295 { 00:33:08.295 "method": "bdev_iscsi_set_options", 00:33:08.295 "params": { 00:33:08.295 "timeout_sec": 30 00:33:08.295 } 00:33:08.295 }, 00:33:08.295 { 00:33:08.295 "method": "bdev_nvme_set_options", 00:33:08.295 "params": { 00:33:08.295 "action_on_timeout": "none", 00:33:08.295 "timeout_us": 0, 00:33:08.295 "timeout_admin_us": 0, 00:33:08.295 "keep_alive_timeout_ms": 10000, 00:33:08.295 "arbitration_burst": 0, 00:33:08.295 "low_priority_weight": 0, 00:33:08.295 "medium_priority_weight": 0, 00:33:08.295 "high_priority_weight": 0, 00:33:08.295 "nvme_adminq_poll_period_us": 10000, 00:33:08.295 "nvme_ioq_poll_period_us": 0, 00:33:08.295 "io_queue_requests": 512, 00:33:08.295 
"delay_cmd_submit": true, 00:33:08.295 "transport_retry_count": 4, 00:33:08.295 "bdev_retry_count": 3, 00:33:08.295 "transport_ack_timeout": 0, 00:33:08.295 "ctrlr_loss_timeout_sec": 0, 00:33:08.295 "reconnect_delay_sec": 0, 00:33:08.295 "fast_io_fail_timeout_sec": 0, 00:33:08.295 "disable_auto_failback": false, 00:33:08.295 "generate_uuids": false, 00:33:08.295 "transport_tos": 0, 00:33:08.295 "nvme_error_stat": false, 00:33:08.295 "rdma_srq_size": 0, 00:33:08.295 "io_path_stat": false, 00:33:08.295 "allow_accel_sequence": false, 00:33:08.295 "rdma_max_cq_size": 0, 00:33:08.295 "rdma_cm_event_timeout_ms": 0, 00:33:08.295 "dhchap_digests": [ 00:33:08.295 "sha256", 00:33:08.295 "sha384", 00:33:08.295 "sha512" 00:33:08.295 ], 00:33:08.295 "dhchap_dhgroups": [ 00:33:08.295 "null", 00:33:08.295 "ffdhe2048", 00:33:08.295 "ffdhe3072", 00:33:08.295 "ffdhe4096", 00:33:08.295 "ffdhe6144", 00:33:08.295 "ffdhe8192" 00:33:08.295 ] 00:33:08.295 } 00:33:08.295 }, 00:33:08.295 { 00:33:08.295 "method": "bdev_nvme_attach_controller", 00:33:08.295 "params": { 00:33:08.296 "name": "nvme0", 00:33:08.296 "trtype": "TCP", 00:33:08.296 "adrfam": "IPv4", 00:33:08.296 "traddr": "127.0.0.1", 00:33:08.296 "trsvcid": "4420", 00:33:08.296 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:08.296 "prchk_reftag": false, 00:33:08.296 "prchk_guard": false, 00:33:08.296 "ctrlr_loss_timeout_sec": 0, 00:33:08.296 "reconnect_delay_sec": 0, 00:33:08.296 "fast_io_fail_timeout_sec": 0, 00:33:08.296 "psk": "key0", 00:33:08.296 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:08.296 "hdgst": false, 00:33:08.296 "ddgst": false, 00:33:08.296 "multipath": "multipath" 00:33:08.296 } 00:33:08.296 }, 00:33:08.296 { 00:33:08.296 "method": "bdev_nvme_set_hotplug", 00:33:08.296 "params": { 00:33:08.296 "period_us": 100000, 00:33:08.296 "enable": false 00:33:08.296 } 00:33:08.296 }, 00:33:08.296 { 00:33:08.296 "method": "bdev_wait_for_examine" 00:33:08.296 } 00:33:08.296 ] 00:33:08.296 }, 00:33:08.296 { 00:33:08.296 "subsystem": "nbd", 00:33:08.296 "config": [] 00:33:08.296 } 00:33:08.296 ] 00:33:08.296 }' 00:33:08.296 04:21:02 keyring_file -- keyring/file.sh@115 -- # killprocess 2584272 00:33:08.296 04:21:02 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2584272 ']' 00:33:08.296 04:21:02 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2584272 00:33:08.296 04:21:02 keyring_file -- common/autotest_common.sh@959 -- # uname 00:33:08.296 04:21:02 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:08.296 04:21:02 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2584272 00:33:08.296 04:21:02 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:08.296 04:21:02 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:08.296 04:21:02 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2584272' 00:33:08.296 killing process with pid 2584272 00:33:08.296 04:21:02 keyring_file -- common/autotest_common.sh@973 -- # kill 2584272 00:33:08.296 Received shutdown signal, test time was about 1.000000 seconds 00:33:08.296 00:33:08.296 Latency(us) 00:33:08.296 [2024-12-10T03:21:02.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:08.296 [2024-12-10T03:21:02.685Z] =================================================================================================================== 00:33:08.296 [2024-12-10T03:21:02.685Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:08.296 04:21:02 
keyring_file -- common/autotest_common.sh@978 -- # wait 2584272 00:33:08.296 04:21:02 keyring_file -- keyring/file.sh@118 -- # bperfpid=2585970 00:33:08.296 04:21:02 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2585970 /var/tmp/bperf.sock 00:33:08.296 04:21:02 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2585970 ']' 00:33:08.296 04:21:02 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:08.296 04:21:02 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:08.296 04:21:02 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:08.296 04:21:02 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:33:08.296 "subsystems": [ 00:33:08.296 { 00:33:08.296 "subsystem": "keyring", 00:33:08.296 "config": [ 00:33:08.296 { 00:33:08.296 "method": "keyring_file_add_key", 00:33:08.296 "params": { 00:33:08.296 "name": "key0", 00:33:08.296 "path": "/tmp/tmp.7lIF2FkYyG" 00:33:08.296 } 00:33:08.296 }, 00:33:08.296 { 00:33:08.296 "method": "keyring_file_add_key", 00:33:08.296 "params": { 00:33:08.296 "name": "key1", 00:33:08.296 "path": "/tmp/tmp.6bYtHCSXes" 00:33:08.296 } 00:33:08.296 } 00:33:08.296 ] 00:33:08.296 }, 00:33:08.296 { 00:33:08.296 "subsystem": "iobuf", 00:33:08.296 "config": [ 00:33:08.296 { 00:33:08.296 "method": "iobuf_set_options", 00:33:08.296 "params": { 00:33:08.296 "small_pool_count": 8192, 00:33:08.296 "large_pool_count": 1024, 00:33:08.296 "small_bufsize": 8192, 00:33:08.296 "large_bufsize": 135168, 00:33:08.296 "enable_numa": false 00:33:08.296 } 00:33:08.296 } 00:33:08.296 ] 00:33:08.296 }, 00:33:08.296 { 00:33:08.296 "subsystem": "sock", 00:33:08.296 "config": [ 00:33:08.296 { 00:33:08.296 "method": "sock_set_default_impl", 00:33:08.296 "params": { 00:33:08.296 "impl_name": "posix" 00:33:08.296 } 00:33:08.296 }, 00:33:08.296 { 00:33:08.296 "method": "sock_impl_set_options", 00:33:08.296 "params": { 00:33:08.296 "impl_name": "ssl", 00:33:08.296 "recv_buf_size": 4096, 00:33:08.296 "send_buf_size": 4096, 00:33:08.296 "enable_recv_pipe": true, 00:33:08.296 "enable_quickack": false, 00:33:08.296 "enable_placement_id": 0, 00:33:08.296 "enable_zerocopy_send_server": true, 00:33:08.296 "enable_zerocopy_send_client": false, 00:33:08.296 "zerocopy_threshold": 0, 00:33:08.296 "tls_version": 0, 00:33:08.296 "enable_ktls": false 00:33:08.296 } 00:33:08.296 }, 00:33:08.296 { 00:33:08.296 "method": "sock_impl_set_options", 00:33:08.296 "params": { 00:33:08.296 "impl_name": "posix", 00:33:08.296 "recv_buf_size": 2097152, 00:33:08.296 "send_buf_size": 2097152, 00:33:08.296 "enable_recv_pipe": true, 00:33:08.296 "enable_quickack": false, 00:33:08.296 "enable_placement_id": 0, 00:33:08.296 "enable_zerocopy_send_server": true, 00:33:08.296 "enable_zerocopy_send_client": false, 00:33:08.296 "zerocopy_threshold": 0, 00:33:08.296 "tls_version": 0, 00:33:08.296 "enable_ktls": false 00:33:08.296 } 00:33:08.296 } 00:33:08.296 ] 00:33:08.296 }, 00:33:08.296 { 00:33:08.296 "subsystem": "vmd", 00:33:08.296 "config": [] 00:33:08.296 }, 00:33:08.296 { 00:33:08.296 "subsystem": "accel", 00:33:08.296 "config": [ 00:33:08.296 { 00:33:08.296 "method": "accel_set_options", 00:33:08.296 "params": { 00:33:08.296 "small_cache_size": 128, 00:33:08.296 "large_cache_size": 16, 00:33:08.296 "task_count": 2048, 00:33:08.296 "sequence_count": 2048, 00:33:08.296 "buf_count": 2048 00:33:08.296 } 
00:33:08.296 } 00:33:08.296 ] 00:33:08.296 }, 00:33:08.296 { 00:33:08.296 "subsystem": "bdev", 00:33:08.296 "config": [ 00:33:08.296 { 00:33:08.296 "method": "bdev_set_options", 00:33:08.296 "params": { 00:33:08.296 "bdev_io_pool_size": 65535, 00:33:08.296 "bdev_io_cache_size": 256, 00:33:08.296 "bdev_auto_examine": true, 00:33:08.296 "iobuf_small_cache_size": 128, 00:33:08.296 "iobuf_large_cache_size": 16 00:33:08.296 } 00:33:08.296 }, 00:33:08.296 { 00:33:08.296 "method": "bdev_raid_set_options", 00:33:08.296 "params": { 00:33:08.296 "process_window_size_kb": 1024, 00:33:08.296 "process_max_bandwidth_mb_sec": 0 00:33:08.296 } 00:33:08.296 }, 00:33:08.296 { 00:33:08.296 "method": "bdev_iscsi_set_options", 00:33:08.296 "params": { 00:33:08.296 "timeout_sec": 30 00:33:08.296 } 00:33:08.296 }, 00:33:08.296 { 00:33:08.296 "method": "bdev_nvme_set_options", 00:33:08.296 "params": { 00:33:08.296 "action_on_timeout": "none", 00:33:08.296 "timeout_us": 0, 00:33:08.296 "timeout_admin_us": 0, 00:33:08.296 "keep_alive_timeout_ms": 10000, 00:33:08.296 "arbitration_burst": 0, 00:33:08.296 "low_priority_weight": 0, 00:33:08.296 "medium_priority_weight": 0, 00:33:08.296 "high_priority_weight": 0, 00:33:08.296 "nvme_adminq_poll_period_us": 10000, 00:33:08.296 "nvme_ioq_poll_period_us": 0, 00:33:08.296 "io_queue_requests": 512, 00:33:08.296 "delay_cmd_submit": true, 00:33:08.296 "transport_retry_count": 4, 00:33:08.296 "bdev_retry_count": 3, 00:33:08.296 "transport_ack_timeout": 0, 00:33:08.296 "ctrlr_loss_timeout_sec": 0, 00:33:08.296 "reconnect_delay_sec": 0, 00:33:08.296 "fast_io_fail_timeout_sec": 0, 00:33:08.296 "disable_auto_failback": false, 00:33:08.296 "generate_uuids": false, 00:33:08.296 "transport_tos": 0, 00:33:08.296 "nvme_error_stat": false, 00:33:08.296 "rdma_srq_size": 0, 00:33:08.296 "io_path_stat": false, 00:33:08.296 "allow_accel_sequence": false, 00:33:08.296 "rdma_max_cq_size": 0, 00:33:08.296 "rdma_cm_event_timeout_ms": 0, 00:33:08.296 "dhchap_digests": [ 00:33:08.296 "sha256", 00:33:08.296 "sha384", 00:33:08.296 "sha512" 00:33:08.296 ], 00:33:08.296 "dhchap_dhgroups": [ 00:33:08.296 "null", 00:33:08.296 "ffdhe2048", 00:33:08.296 "ffdhe3072", 00:33:08.296 "ffdhe4096", 00:33:08.296 "ffdhe6144", 00:33:08.296 "ffdhe8192" 00:33:08.296 ] 00:33:08.297 } 00:33:08.297 }, 00:33:08.297 { 00:33:08.297 "method": "bdev_nvme_attach_controller", 00:33:08.297 "params": { 00:33:08.297 "name": "nvme0", 00:33:08.297 "trtype": "TCP", 00:33:08.297 "adrfam": "IPv4", 00:33:08.297 "traddr": "127.0.0.1", 00:33:08.297 "trsvcid": "4420", 00:33:08.297 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:08.297 "prchk_reftag": false, 00:33:08.297 "prchk_guard": false, 00:33:08.297 "ctrlr_loss_timeout_sec": 0, 00:33:08.297 "reconnect_delay_sec": 0, 00:33:08.297 "fast_io_fail_timeout_sec": 0, 00:33:08.297 "psk": "key0", 00:33:08.297 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:08.297 "hdgst": false, 00:33:08.297 "ddgst": false, 00:33:08.297 "multipath": "multipath" 00:33:08.297 } 00:33:08.297 }, 00:33:08.297 { 00:33:08.297 "method": "bdev_nvme_set_hotplug", 00:33:08.297 "params": { 00:33:08.297 "period_us": 100000, 00:33:08.297 "enable": false 00:33:08.297 } 00:33:08.297 }, 00:33:08.297 { 00:33:08.297 "method": "bdev_wait_for_examine" 00:33:08.297 } 00:33:08.297 ] 00:33:08.297 }, 00:33:08.297 { 00:33:08.297 "subsystem": "nbd", 00:33:08.297 "config": [] 00:33:08.297 } 00:33:08.297 ] 00:33:08.297 }' 00:33:08.297 04:21:02 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bperf.sock...' 00:33:08.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:08.297 04:21:02 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:08.297 04:21:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:08.555 [2024-12-10 04:21:02.690442] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:33:08.555 [2024-12-10 04:21:02.690543] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2585970 ] 00:33:08.555 [2024-12-10 04:21:02.758774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.555 [2024-12-10 04:21:02.813122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.813 [2024-12-10 04:21:02.995690] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:08.813 04:21:03 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:08.813 04:21:03 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:33:08.813 04:21:03 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:33:08.813 04:21:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:08.813 04:21:03 keyring_file -- keyring/file.sh@121 -- # jq length 00:33:09.070 04:21:03 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:09.070 04:21:03 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:33:09.070 04:21:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:09.070 04:21:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:09.070 04:21:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:09.070 04:21:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:09.070 04:21:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:09.328 04:21:03 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:33:09.328 04:21:03 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:33:09.328 04:21:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:09.328 04:21:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:09.328 04:21:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:09.328 04:21:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:09.328 04:21:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:09.586 04:21:03 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:33:09.586 04:21:03 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:33:09.844 04:21:03 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:33:09.844 04:21:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:10.102 04:21:04 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:33:10.102 04:21:04 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:10.102 04:21:04 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.7lIF2FkYyG /tmp/tmp.6bYtHCSXes 00:33:10.102 04:21:04 keyring_file -- keyring/file.sh@20 -- # killprocess 2585970 00:33:10.102 04:21:04 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2585970 ']' 00:33:10.102 04:21:04 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2585970 00:33:10.102 04:21:04 keyring_file -- common/autotest_common.sh@959 -- # uname 00:33:10.102 04:21:04 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:10.102 04:21:04 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2585970 00:33:10.102 04:21:04 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:10.102 04:21:04 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:10.102 04:21:04 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2585970' 00:33:10.102 killing process with pid 2585970 00:33:10.102 04:21:04 keyring_file -- common/autotest_common.sh@973 -- # kill 2585970 00:33:10.103 Received shutdown signal, test time was about 1.000000 seconds 00:33:10.103 00:33:10.103 Latency(us) 00:33:10.103 [2024-12-10T03:21:04.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.103 [2024-12-10T03:21:04.492Z] =================================================================================================================== 00:33:10.103 [2024-12-10T03:21:04.492Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:10.103 04:21:04 keyring_file -- common/autotest_common.sh@978 -- # wait 2585970 00:33:10.360 04:21:04 keyring_file -- keyring/file.sh@21 -- # killprocess 2584267 00:33:10.360 04:21:04 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2584267 ']' 00:33:10.360 04:21:04 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2584267 00:33:10.360 04:21:04 keyring_file -- common/autotest_common.sh@959 -- # uname 00:33:10.360 04:21:04 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:10.360 04:21:04 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2584267 00:33:10.360 04:21:04 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:10.360 04:21:04 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:10.361 04:21:04 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2584267' 00:33:10.361 killing process with pid 2584267 00:33:10.361 04:21:04 keyring_file -- common/autotest_common.sh@973 -- # kill 2584267 00:33:10.361 04:21:04 keyring_file -- common/autotest_common.sh@978 -- # wait 2584267 00:33:10.619 00:33:10.619 real 0m14.733s 00:33:10.619 user 0m37.536s 00:33:10.619 sys 0m3.266s 00:33:10.619 04:21:04 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:10.619 04:21:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:10.619 ************************************ 00:33:10.619 END TEST keyring_file 00:33:10.619 ************************************ 00:33:10.619 04:21:04 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:33:10.619 04:21:04 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:10.619 04:21:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:10.619 04:21:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:10.619 04:21:04 
-- common/autotest_common.sh@10 -- # set +x 00:33:10.881 ************************************ 00:33:10.881 START TEST keyring_linux 00:33:10.881 ************************************ 00:33:10.881 04:21:05 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:10.881 Joined session keyring: 949292290 00:33:10.881 * Looking for test storage... 00:33:10.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:10.881 04:21:05 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:10.881 04:21:05 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:33:10.881 04:21:05 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:10.881 04:21:05 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@345 -- # : 1 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@368 -- # return 0 00:33:10.881 04:21:05 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:10.881 04:21:05 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:10.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.881 --rc genhtml_branch_coverage=1 00:33:10.881 --rc genhtml_function_coverage=1 00:33:10.881 --rc genhtml_legend=1 00:33:10.881 --rc geninfo_all_blocks=1 00:33:10.881 --rc geninfo_unexecuted_blocks=1 00:33:10.881 00:33:10.881 ' 00:33:10.881 04:21:05 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:10.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.881 --rc genhtml_branch_coverage=1 00:33:10.881 --rc genhtml_function_coverage=1 00:33:10.881 --rc genhtml_legend=1 00:33:10.881 --rc geninfo_all_blocks=1 00:33:10.881 --rc geninfo_unexecuted_blocks=1 00:33:10.881 00:33:10.881 ' 00:33:10.881 04:21:05 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:10.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.881 --rc genhtml_branch_coverage=1 00:33:10.881 --rc genhtml_function_coverage=1 00:33:10.881 --rc genhtml_legend=1 00:33:10.881 --rc geninfo_all_blocks=1 00:33:10.881 --rc geninfo_unexecuted_blocks=1 00:33:10.881 00:33:10.881 ' 00:33:10.881 04:21:05 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:10.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.881 --rc genhtml_branch_coverage=1 00:33:10.881 --rc genhtml_function_coverage=1 00:33:10.881 --rc genhtml_legend=1 00:33:10.881 --rc geninfo_all_blocks=1 00:33:10.881 --rc geninfo_unexecuted_blocks=1 00:33:10.881 00:33:10.881 ' 00:33:10.881 04:21:05 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:10.881 04:21:05 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:10.881 04:21:05 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:10.881 04:21:05 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.881 04:21:05 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.881 04:21:05 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.881 04:21:05 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:10.881 04:21:05 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
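Before the keyring_linux target and bdevperf instances come up, a sketch of the flow this test builds towards may help: unlike keyring_file, which registers PSK files, keyring_linux places the PSK in the Linux session keyring with keyctl and SPDK resolves it by name. The commands below mirror ones issued later in this trace (keyctl add/search, keyring_linux_set_options, framework_start_init, bdev_nvme_attach_controller); the sketch is illustrative and not part of linux.sh itself.
# Illustrative sketch; bdevperf is started with -r /var/tmp/bperf.sock -z --wait-for-rpc,
# so the keyring option is enabled before framework_start_init.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
$rpc -s /var/tmp/bperf.sock keyring_linux_set_options --enable
$rpc -s /var/tmp/bperf.sock framework_start_init
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
# keyctl search returns the serial number (839879865 in this run) that the test
# compares against the .sn field reported by keyring_get_keys.
keyctl search @s user :spdk-test:key0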
00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:10.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:10.881 04:21:05 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:10.881 04:21:05 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:10.881 04:21:05 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:10.881 04:21:05 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:10.881 04:21:05 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:10.881 04:21:05 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:10.881 04:21:05 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:10.881 04:21:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:10.881 04:21:05 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:10.881 04:21:05 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:10.881 04:21:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:10.881 04:21:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:10.881 04:21:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:33:10.881 04:21:05 keyring_linux -- nvmf/common.sh@733 -- # python - 00:33:10.881 04:21:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:10.881 04:21:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:10.881 /tmp/:spdk-test:key0 00:33:10.881 04:21:05 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:10.881 04:21:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:10.881 04:21:05 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:10.881 04:21:05 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:10.881 04:21:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:10.882 04:21:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:10.882 
04:21:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:10.882 04:21:05 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:10.882 04:21:05 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:33:10.882 04:21:05 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:10.882 04:21:05 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:33:10.882 04:21:05 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:33:10.882 04:21:05 keyring_linux -- nvmf/common.sh@733 -- # python - 00:33:10.882 04:21:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:10.882 04:21:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:10.882 /tmp/:spdk-test:key1 00:33:10.882 04:21:05 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2586334 00:33:10.882 04:21:05 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:10.882 04:21:05 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2586334 00:33:10.882 04:21:05 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2586334 ']' 00:33:10.882 04:21:05 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.882 04:21:05 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.882 04:21:05 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:10.882 04:21:05 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.882 04:21:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:11.140 [2024-12-10 04:21:05.296917] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:33:11.140 [2024-12-10 04:21:05.296997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2586334 ] 00:33:11.140 [2024-12-10 04:21:05.363775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.140 [2024-12-10 04:21:05.421288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.398 04:21:05 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:11.398 04:21:05 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:33:11.398 04:21:05 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:11.398 04:21:05 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.398 04:21:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:11.398 [2024-12-10 04:21:05.708215] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:11.398 null0 00:33:11.398 [2024-12-10 04:21:05.740257] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:11.398 [2024-12-10 04:21:05.740783] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:11.398 04:21:05 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.398 04:21:05 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:11.398 839879865 00:33:11.398 04:21:05 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:11.398 670208457 00:33:11.398 04:21:05 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2586350 00:33:11.398 04:21:05 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2586350 /var/tmp/bperf.sock 00:33:11.398 04:21:05 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:11.398 04:21:05 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2586350 ']' 00:33:11.398 04:21:05 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:11.398 04:21:05 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:11.398 04:21:05 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:11.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:11.398 04:21:05 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:11.398 04:21:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:11.656 [2024-12-10 04:21:05.810068] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:33:11.656 [2024-12-10 04:21:05.810143] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2586350 ] 00:33:11.656 [2024-12-10 04:21:05.880733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.656 [2024-12-10 04:21:05.941321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:11.913 04:21:06 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:11.913 04:21:06 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:33:11.913 04:21:06 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:11.913 04:21:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:12.170 04:21:06 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:12.170 04:21:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:12.428 04:21:06 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:12.428 04:21:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:12.686 [2024-12-10 04:21:06.914711] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:12.686 nvme0n1 00:33:12.686 04:21:07 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:12.686 04:21:07 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:12.686 04:21:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:12.686 04:21:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:12.686 04:21:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:12.686 04:21:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:12.944 04:21:07 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:12.944 04:21:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:12.944 04:21:07 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:12.944 04:21:07 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:12.944 04:21:07 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:12.944 04:21:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:12.944 04:21:07 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:13.510 04:21:07 keyring_linux -- keyring/linux.sh@25 -- # sn=839879865 00:33:13.510 04:21:07 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:13.510 04:21:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:13.510 04:21:07 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 839879865 == \8\3\9\8\7\9\8\6\5 ]] 00:33:13.510 04:21:07 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 839879865 00:33:13.510 04:21:07 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:13.510 04:21:07 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:13.510 Running I/O for 1 seconds... 00:33:14.443 10143.00 IOPS, 39.62 MiB/s 00:33:14.443 Latency(us) 00:33:14.443 [2024-12-10T03:21:08.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:14.443 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:14.443 nvme0n1 : 1.01 10143.88 39.62 0.00 0.00 12535.81 6262.33 17767.54 00:33:14.443 [2024-12-10T03:21:08.832Z] =================================================================================================================== 00:33:14.443 [2024-12-10T03:21:08.832Z] Total : 10143.88 39.62 0.00 0.00 12535.81 6262.33 17767.54 00:33:14.443 { 00:33:14.443 "results": [ 00:33:14.443 { 00:33:14.443 "job": "nvme0n1", 00:33:14.443 "core_mask": "0x2", 00:33:14.443 "workload": "randread", 00:33:14.443 "status": "finished", 00:33:14.443 "queue_depth": 128, 00:33:14.443 "io_size": 4096, 00:33:14.443 "runtime": 1.012532, 00:33:14.443 "iops": 10143.876934259855, 00:33:14.443 "mibps": 39.62451927445256, 00:33:14.443 "io_failed": 0, 00:33:14.443 "io_timeout": 0, 00:33:14.443 "avg_latency_us": 12535.805068135021, 00:33:14.443 "min_latency_us": 6262.328888888889, 00:33:14.443 "max_latency_us": 17767.53777777778 00:33:14.443 } 00:33:14.443 ], 00:33:14.443 "core_count": 1 00:33:14.443 } 00:33:14.443 04:21:08 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:14.443 04:21:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:14.701 04:21:09 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:14.701 04:21:09 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:14.701 04:21:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:14.701 04:21:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:14.701 04:21:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:14.701 04:21:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:14.959 04:21:09 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:14.959 04:21:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:14.959 04:21:09 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:14.959 04:21:09 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:14.959 04:21:09 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:33:14.959 04:21:09 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:33:14.959 04:21:09 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:14.959 04:21:09 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:14.959 04:21:09 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:14.959 04:21:09 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:14.959 04:21:09 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:14.959 04:21:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:15.217 [2024-12-10 04:21:09.574237] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:15.217 [2024-12-10 04:21:09.574374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x253ff20 (107): Transport endpoint is not connected 00:33:15.217 [2024-12-10 04:21:09.575366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x253ff20 (9): Bad file descriptor 00:33:15.217 [2024-12-10 04:21:09.576365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:33:15.217 [2024-12-10 04:21:09.576384] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:15.217 [2024-12-10 04:21:09.576406] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:15.217 [2024-12-10 04:21:09.576419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:33:15.217 request: 00:33:15.217 { 00:33:15.217 "name": "nvme0", 00:33:15.217 "trtype": "tcp", 00:33:15.217 "traddr": "127.0.0.1", 00:33:15.217 "adrfam": "ipv4", 00:33:15.217 "trsvcid": "4420", 00:33:15.217 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:15.217 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:15.217 "prchk_reftag": false, 00:33:15.217 "prchk_guard": false, 00:33:15.217 "hdgst": false, 00:33:15.217 "ddgst": false, 00:33:15.217 "psk": ":spdk-test:key1", 00:33:15.217 "allow_unrecognized_csi": false, 00:33:15.217 "method": "bdev_nvme_attach_controller", 00:33:15.217 "req_id": 1 00:33:15.217 } 00:33:15.217 Got JSON-RPC error response 00:33:15.217 response: 00:33:15.217 { 00:33:15.217 "code": -5, 00:33:15.217 "message": "Input/output error" 00:33:15.217 } 00:33:15.217 04:21:09 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:33:15.217 04:21:09 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:15.217 04:21:09 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:15.217 04:21:09 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:15.217 04:21:09 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:15.217 04:21:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:15.217 04:21:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:15.217 04:21:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:15.217 04:21:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:15.217 04:21:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:15.217 04:21:09 keyring_linux -- keyring/linux.sh@33 -- # sn=839879865 00:33:15.217 04:21:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 839879865 00:33:15.217 1 links removed 00:33:15.217 04:21:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:15.217 04:21:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:15.217 04:21:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:15.217 04:21:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:15.217 04:21:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:15.475 04:21:09 keyring_linux -- keyring/linux.sh@33 -- # sn=670208457 00:33:15.475 04:21:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 670208457 00:33:15.475 1 links removed 00:33:15.475 04:21:09 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2586350 00:33:15.475 04:21:09 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2586350 ']' 00:33:15.475 04:21:09 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2586350 00:33:15.475 04:21:09 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:33:15.475 04:21:09 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:15.475 04:21:09 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2586350 00:33:15.475 04:21:09 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:15.475 04:21:09 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:15.475 04:21:09 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2586350' 00:33:15.475 killing process with pid 2586350 00:33:15.475 04:21:09 keyring_linux -- common/autotest_common.sh@973 -- # kill 2586350 00:33:15.475 Received shutdown signal, test time was about 1.000000 seconds 00:33:15.475 00:33:15.475 
Latency(us) 00:33:15.475 [2024-12-10T03:21:09.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:15.475 [2024-12-10T03:21:09.864Z] =================================================================================================================== 00:33:15.475 [2024-12-10T03:21:09.864Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:15.475 04:21:09 keyring_linux -- common/autotest_common.sh@978 -- # wait 2586350 00:33:15.475 04:21:09 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2586334 00:33:15.475 04:21:09 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2586334 ']' 00:33:15.475 04:21:09 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2586334 00:33:15.475 04:21:09 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:33:15.475 04:21:09 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:15.475 04:21:09 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2586334 00:33:15.733 04:21:09 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:15.733 04:21:09 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:15.733 04:21:09 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2586334' 00:33:15.733 killing process with pid 2586334 00:33:15.733 04:21:09 keyring_linux -- common/autotest_common.sh@973 -- # kill 2586334 00:33:15.733 04:21:09 keyring_linux -- common/autotest_common.sh@978 -- # wait 2586334 00:33:15.991 00:33:15.991 real 0m5.244s 00:33:15.991 user 0m10.421s 00:33:15.991 sys 0m1.667s 00:33:15.991 04:21:10 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:15.991 04:21:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:15.991 ************************************ 00:33:15.991 END TEST keyring_linux 00:33:15.991 ************************************ 00:33:15.991 04:21:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:33:15.991 04:21:10 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:33:15.992 04:21:10 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:15.992 04:21:10 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:33:15.992 04:21:10 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:15.992 04:21:10 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:33:15.992 04:21:10 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:15.992 04:21:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:15.992 04:21:10 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:15.992 04:21:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:15.992 04:21:10 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:15.992 04:21:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:15.992 04:21:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:15.992 04:21:10 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:15.992 04:21:10 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:33:15.992 04:21:10 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:33:15.992 04:21:10 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:33:15.992 04:21:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:15.992 04:21:10 -- common/autotest_common.sh@10 -- # set +x 00:33:15.992 04:21:10 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:33:15.992 04:21:10 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:33:15.992 04:21:10 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:33:15.992 04:21:10 -- common/autotest_common.sh@10 -- # set +x 00:33:17.892 INFO: APP EXITING 
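
check_keys and the cleanup path in the trace above resolve each key name to its kernel serial number, compare the stored payload against the expected interchange string, and finally unlink the key. The same round trip can be reproduced directly with keyutils (a sketch, assuming :spdk-test:key0 is still linked into the session keyring; the serial number is the one from this run):

    sn=$(keyctl search @s user :spdk-test:key0)   # name -> serial number (839879865 above)
    keyctl print "$sn"                            # payload; expected to match the NVMeTLSkey-1:00:MDAx... string from setup
    keyctl unlink "$sn"                           # same cleanup keyring/linux.sh performs; reports "1 links removed"

The negative test that precedes the cleanup attaches with --psk :spdk-test:key1 and is expected to fail (the NOT wrapper inverts the exit status), which is why the bdev_nvme_attach_controller errors and the -5 JSON-RPC response above do not fail the suite.
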
00:33:17.892 INFO: killing all VMs 00:33:17.892 INFO: killing vhost app 00:33:17.892 INFO: EXIT DONE 00:33:19.269 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:33:19.269 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:33:19.269 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:33:19.269 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:33:19.269 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:33:19.269 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:33:19.269 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:33:19.269 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:33:19.269 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:33:19.269 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:33:19.269 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:33:19.269 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:33:19.269 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:33:19.269 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:33:19.269 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:33:19.269 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:33:19.269 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:33:20.644 Cleaning 00:33:20.644 Removing: /var/run/dpdk/spdk0/config 00:33:20.644 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:20.644 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:20.644 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:20.644 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:20.644 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:20.644 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:20.644 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:20.644 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:20.644 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:20.644 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:20.644 Removing: /var/run/dpdk/spdk1/config 00:33:20.644 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:20.644 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:20.644 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:20.644 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:20.644 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:20.644 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:20.644 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:20.644 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:20.644 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:20.644 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:20.644 Removing: /var/run/dpdk/spdk2/config 00:33:20.644 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:20.644 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:20.644 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:20.644 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:20.644 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:20.644 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:20.644 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:20.644 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:20.644 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:20.644 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:20.644 Removing: /var/run/dpdk/spdk3/config 00:33:20.644 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:20.644 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:20.644 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:20.644 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:20.644 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:20.644 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:20.644 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:20.644 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:20.644 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:20.644 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:20.644 Removing: /var/run/dpdk/spdk4/config 00:33:20.644 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:20.644 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:20.644 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:20.644 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:20.644 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:20.644 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:20.644 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:20.644 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:20.644 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:20.644 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:20.644 Removing: /dev/shm/bdev_svc_trace.1 00:33:20.644 Removing: /dev/shm/nvmf_trace.0 00:33:20.644 Removing: /dev/shm/spdk_tgt_trace.pid2264750 00:33:20.644 Removing: /var/run/dpdk/spdk0 00:33:20.644 Removing: /var/run/dpdk/spdk1 00:33:20.644 Removing: /var/run/dpdk/spdk2 00:33:20.644 Removing: /var/run/dpdk/spdk3 00:33:20.644 Removing: /var/run/dpdk/spdk4 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2263071 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2263813 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2264750 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2265082 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2265775 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2265919 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2266633 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2266743 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2267023 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2268227 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2269148 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2269465 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2269659 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2269878 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2270134 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2270347 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2270505 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2270698 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2270961 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2273385 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2273665 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2273831 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2273839 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2274265 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2274273 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2274655 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2274713 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2274882 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2275008 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2275178 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2275191 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2275681 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2275839 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2276050 00:33:20.644 Removing: 
/var/run/dpdk/spdk_pid2278275 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2280812 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2288434 00:33:20.644 Removing: /var/run/dpdk/spdk_pid2288951 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2291365 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2291640 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2294273 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2298006 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2300199 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2306613 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2311856 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2313058 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2313727 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2324723 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2327151 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2354975 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2358270 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2362720 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2366983 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2366991 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2367647 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2368184 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2368838 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2369240 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2369246 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2369507 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2369574 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2369647 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2370223 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2370835 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2371494 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2371895 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2371897 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2372162 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2373060 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2373794 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2379124 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2407060 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2409973 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2411184 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2413104 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2413246 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2413383 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2413526 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2413978 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2415309 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2416150 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2416574 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2418182 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2418499 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2419053 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2421444 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2424851 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2424852 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2424853 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2427075 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2431927 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2434598 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2438359 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2439315 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2440405 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2441498 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2444374 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2447407 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2449709 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2453943 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2454009 00:33:20.904 Removing: 
/var/run/dpdk/spdk_pid2456857 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2456991 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2457238 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2457514 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2457521 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2460285 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2460625 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2463296 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2465269 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2468714 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2472321 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2478932 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2483398 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2483408 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2496140 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2496645 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2497084 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2497488 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2498071 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2498475 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2498886 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2499420 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2501816 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2502079 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2505882 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2506052 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2509422 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2511921 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2519458 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2519970 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2522367 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2522637 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2525145 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2528959 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2530993 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2537368 00:33:20.904 Removing: /var/run/dpdk/spdk_pid2542576 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2543760 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2544425 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2554834 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2557595 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2559590 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2564661 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2564666 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2567566 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2568977 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2570415 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2571233 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2572634 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2573454 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2578879 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2579184 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2579575 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2581133 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2581532 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2581809 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2584267 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2584272 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2585970 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2586334 00:33:21.192 Removing: /var/run/dpdk/spdk_pid2586350 00:33:21.192 Clean 00:33:21.192 04:21:15 -- common/autotest_common.sh@1453 -- # return 0 00:33:21.192 04:21:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:33:21.192 04:21:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:21.192 04:21:15 -- common/autotest_common.sh@10 -- # set +x 00:33:21.192 04:21:15 -- 
spdk/autotest.sh@391 -- # timing_exit autotest 00:33:21.192 04:21:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:21.192 04:21:15 -- common/autotest_common.sh@10 -- # set +x 00:33:21.192 04:21:15 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:21.192 04:21:15 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:21.192 04:21:15 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:21.192 04:21:15 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:33:21.192 04:21:15 -- spdk/autotest.sh@398 -- # hostname 00:33:21.192 04:21:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:21.476 geninfo: WARNING: invalid characters removed from testname! 00:33:53.560 04:21:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:56.102 04:21:50 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:59.397 04:21:53 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:02.693 04:21:56 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:05.235 04:21:59 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:08.533 04:22:02 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:11.075 04:22:05 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:11.075 04:22:05 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:11.075 04:22:05 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:34:11.075 04:22:05 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:11.075 04:22:05 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:11.075 04:22:05 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:11.075 + [[ -n 2192607 ]] 00:34:11.075 + sudo kill 2192607 00:34:11.086 [Pipeline] } 00:34:11.100 [Pipeline] // stage 00:34:11.105 [Pipeline] } 00:34:11.119 [Pipeline] // timeout 00:34:11.122 [Pipeline] } 00:34:11.134 [Pipeline] // catchError 00:34:11.138 [Pipeline] } 00:34:11.149 [Pipeline] // wrap 00:34:11.153 [Pipeline] } 00:34:11.164 [Pipeline] // catchError 00:34:11.171 [Pipeline] stage 00:34:11.173 [Pipeline] { (Epilogue) 00:34:11.184 [Pipeline] catchError 00:34:11.186 [Pipeline] { 00:34:11.198 [Pipeline] echo 00:34:11.199 Cleanup processes 00:34:11.205 [Pipeline] sh 00:34:11.492 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:11.492 2597520 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:11.506 [Pipeline] sh 00:34:11.792 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:11.792 ++ grep -v 'sudo pgrep' 00:34:11.792 ++ awk '{print $1}' 00:34:11.792 + sudo kill -9 00:34:11.792 + true 00:34:11.804 [Pipeline] sh 00:34:12.091 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:22.101 [Pipeline] sh 00:34:22.390 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:22.390 Artifacts sizes are good 00:34:22.404 [Pipeline] archiveArtifacts 00:34:22.412 Archiving artifacts 00:34:22.590 [Pipeline] sh 00:34:22.921 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:22.937 [Pipeline] cleanWs 00:34:22.947 [WS-CLEANUP] Deleting project workspace... 00:34:22.947 [WS-CLEANUP] Deferred wipeout is used... 00:34:22.954 [WS-CLEANUP] done 00:34:22.956 [Pipeline] } 00:34:22.973 [Pipeline] // catchError 00:34:22.984 [Pipeline] sh 00:34:23.267 + logger -p user.info -t JENKINS-CI 00:34:23.275 [Pipeline] } 00:34:23.289 [Pipeline] // stage 00:34:23.294 [Pipeline] } 00:34:23.308 [Pipeline] // node 00:34:23.313 [Pipeline] End of Pipeline 00:34:23.374 Finished: SUCCESS
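
For reference, the coverage post-processing recorded near the end of the log reduces to a short lcov pipeline: capture the test run, merge it with the baseline, then strip external and helper sources. A condensed sketch of that sequence, assuming the same output directory as the log and leaving out the --rc lcov_branch_coverage/genhtml switches that the job passes on every invocation:

    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"   # merge baseline + test capture
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        # drop out-of-tree and helper sources; --ignore-errors unused tolerates patterns that match nothing
        lcov -q --ignore-errors unused -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done
    rm -f "$out/cov_base.info" "$out/cov_test.info"

This mirrors the lcov -a / lcov -r calls in the log; the flamegraph and artifact-archiving steps that follow are Jenkins-side bookkeeping and are left out of the sketch.
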